Switches reverting to OnOff

I have several switches that keep reverting to On/Off mode after I change them to Dimmer mode, and I can’t figure out what’s causing it. It only seems to affect the switches I configured as dimmers from the most recent batch of 10 that I installed; none of the ones I installed previously are affected.

All of my switches appear to be on the same firmware, 0x01020212 (2.18). I am seeing, however, that the newest switches are using the ZHA quirk zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv13 while the older ones are using zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv12.

I can’t see how the difference in quirks would affect this, but the diff between v12 and v13 is below in case it helps:

@@ -1,5 +1,5 @@
-class InovelliVZM31SNv12(CustomDevice):
-    """VZM31-SN 2 in 1 Switch/Dimmer Module Firmware version 2.08 and above."""
+class InovelliVZM31SNv13(CustomDevice):
+    """VZM31-SN 2 in 1 Switch/Dimmer Module Firmware version 2.17 and above."""
 
     signature = {
         MODELS_INFO: [("Inovelli", "VZM31-SN")],
@@ -38,6 +38,22 @@ class InovelliVZM31SNv12(CustomDevice):
                     INOVELLI_VZM31SN_CLUSTER_ID,
                 ],
             },
+            3: {
+                PROFILE_ID: zha.PROFILE_ID,
+                DEVICE_TYPE: DeviceType.DIMMER_SWITCH,
+                INPUT_CLUSTERS: [
+                    Basic.cluster_id,
+                    Identify.cluster_id,
+                    Groups.cluster_id,
+                    Scenes.cluster_id,
+                ],
+                OUTPUT_CLUSTERS: [
+                    Identify.cluster_id,
+                    OnOff.cluster_id,
+                    LevelControl.cluster_id,
+                    INOVELLI_VZM31SN_CLUSTER_ID,
+                ],
+            },
             242: {
                 PROFILE_ID: zgp.PROFILE_ID,
                 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,
@@ -80,6 +96,17 @@ class InovelliVZM31SNv12(CustomDevice):
                     InovelliVZM31SNCluster,
                 ],
             },
+            3: {
+                PROFILE_ID: zha.PROFILE_ID,
+                DEVICE_TYPE: DeviceType.DIMMER_SWITCH,
+                INPUT_CLUSTERS: [Basic.cluster_id, Identify.cluster_id],
+                OUTPUT_CLUSTERS: [
+                    Identify.cluster_id,
+                    OnOff.cluster_id,
+                    LevelControl.cluster_id,
+                    InovelliVZM31SNCluster,
+                ],
+            },
             242: {
                 PROFILE_ID: zgp.PROFILE_ID,
                 DEVICE_TYPE: zgp.DeviceType.PROXY_BASIC,

Any thoughts on how to dig further?

I am still having this issue, and I’ve realized that in addition to the switch_mode described above, it also affects the switch_type parameter (#22): any switch set to 0x03, a.k.a. Single Pole Full Sine, gets reset to the default of Single Pole. (Perhaps values 0x01 and 0x02 are affected as well, but I don’t have any switches configured with those settings at the moment.)

These values seem to change back to their defaults most often after a restart of Home Assistant. (Part of the reason could be that the ZHA initialization sequence reads these attributes from the device at startup instead of using a cached value, so the change shows up right after a restart even if it actually happened some time earlier and was simply never reported back to the coordinator/ZHA/Home Assistant. Does all of this suggest a bug somewhere in the switch firmware?)
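
To narrow down whether the value actually changed on the device before the restart, or whether ZHA simply read it for the first time at startup, one option is to compare the cached attribute value with a fresh read. Below is a minimal sketch against zigpy’s cluster API, assuming direct access to the zigpy device object (e.g. from an interactive debugging session), which is not something the ZHA UI exposes as-is; 0xFC31 / 64561 is the Inovelli manufacturer-specific cluster, 22 is the switch_type parameter, and a manufacturer code may be needed depending on how the quirk defines these attributes:

# Minimal sketch, assuming direct access to the zigpy device object;
# not something the ZHA UI exposes as-is.
async def compare_cached_and_live(device, attr_id=22):
    # 0xFC31 (64561) is the Inovelli manufacturer-specific cluster
    cluster = device.endpoints[1].in_clusters[0xFC31]
    # Value ZHA already holds in its attribute cache
    cached, _ = await cluster.read_attributes([attr_id], only_cache=True)
    # Fresh read from the switch itself, bypassing the cache
    live, _ = await cluster.read_attributes([attr_id], allow_cache=False)
    return cached.get(attr_id), live.get(attr_id)

If the two values differ, the device changed the attribute on its own at some earlier point and ZHA only noticed it during initialization.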

I enabled ZHA debug logging and found that my older switches are getting the v12 quirk because they don’t expose endpoint #3:

2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks.registry] Checking quirks for Inovelli VZM31-SN (8a:7f:12:72:ae:21:b2:13)
2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SN'>
2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {1, 2, 242}
2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv9'>
2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {1, 2, 242}
2025-03-30 16:03:30.106 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv10'>
2025-03-30 16:03:30.107 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {1, 2, 242}
2025-03-30 16:03:30.107 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv11'>
2025-03-30 16:03:30.108 DEBUG (MainThread) [zigpy.quirks] Fail because input cluster mismatch on at least one endpoint
2025-03-30 16:03:30.108 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv12'>
2025-03-30 16:03:30.108 DEBUG (MainThread) [zigpy.quirks] Device matches filter signature - device ieee[8a:7f:12:72:ae:21:b2:13]: filter signature[{
     'models_info': [('Inovelli', 'VZM31-SN')],
     'endpoints': {
        1: {
            'profile_id': 260,
            'device_type': <DeviceType.DIMMABLE_LIGHT: 257>,
            'input_clusters': [0, 3, 4, 5, 6, 8, 1794, 2820, 2821, 64561, 64599],
            'output_clusters': [25]
        },
        2: {
            'profile_id': 260,
            'device_type': <DeviceType.DIMMER_SWITCH: 260>,
            'input_clusters': [0, 3, 4, 5],
            'output_clusters': [3, 6, 8, 64561]
        },
        242: {
            'profile_id': 41440,
            'device_type': <DeviceType.PROXY_BASIC: 97>,
            'input_clusters': [],
            'output_clusters': [33]
        }
    }
}]
2025-03-30 16:03:30.108 DEBUG (MainThread) [zigpy.quirks.registry] Found custom device replacement for 8a:7f:12:72:ae:21:b2:13: <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv12'>

And my newer switches are getting the v13 quirk because they do expose endpoint #3:

2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks.registry] Checking quirks for Inovelli VZM31-SN (8e:b1:33:f1:cf:c9:35:8a)
2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SN'>
2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {242, 1, 2, 3}
2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv9'>
2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {242, 1, 2, 3}
2025-03-30 16:03:30.230 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv10'>
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2} {242, 1, 2, 3}
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv11'>
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2, 242} {242, 1, 2, 3}
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv12'>
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks] Fail because endpoint list mismatch: {1, 2, 242} {242, 1, 2, 3}
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks.registry] Considering <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv13'>
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks] Device matches filter signature - device ieee[8e:b1:33:f1:cf:c9:35:8a]: filter signature[{
     'models_info': [('Inovelli', 'VZM31-SN')],
     'endpoints': {
        1: {
            'profile_id': 260,
            'device_type': <DeviceType.DIMMABLE_LIGHT: 257>,
            'input_clusters': [0, 3, 4, 5, 6, 8, 1794, 2820, 2821, 64561, 64599],
            'output_clusters': [25]
        },
        2: {
            'profile_id': 260,
            'device_type': <DeviceType.DIMMER_SWITCH: 260>,
            'input_clusters': [0, 3, 4, 5],
            'output_clusters': [3, 6, 8, 64561]
        },
        3: {
            'profile_id': 260,
            'device_type': <DeviceType.DIMMER_SWITCH: 260>,
            'input_clusters': [0, 3, 4, 5],
            'output_clusters': [3, 6, 8, 64561]
        },
        242: {
            'profile_id': 41440,
            'device_type': <DeviceType.PROXY_BASIC: 97>,
            'input_clusters': [],
            'output_clusters': [33]
        }
    }
}]
2025-03-30 16:03:30.231 DEBUG (MainThread) [zigpy.quirks.registry] Found custom device replacement for 8e:b1:33:f1:cf:c9:35:8a: <class 'zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv13'>
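
For what it’s worth, the “Fail because endpoint list mismatch” lines above appear to boil down to a simple set comparison between the endpoints listed in the quirk signature and the endpoints the device reported when it was interviewed. A rough illustration of the idea (not the actual zigpy implementation):

# Rough sketch of the check behind "Fail because endpoint list mismatch";
# illustrative only, not the real zigpy code.
def endpoints_match(signature_endpoints, device_endpoints):
    sig_ids = set(signature_endpoints)     # e.g. {1, 2, 3, 242} for the v13 signature
    dev_ids = set(device_endpoints) - {0}  # endpoint 0 is ZDO, not part of the signature
    return sig_ids == dev_ids

My older switches still report {1, 2, 242}, so the v13 signature can never match them and the registry falls back to v12.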

Both the old and new switches are on the same firmware version, v2.18 / 0x01020212. Shouldn’t the older switches expose the new endpoint #3 as well?

I don’t use ZHA, but I’m pretty sure it doesn’t support re-interviewing end devices, which is what’s needed to pick up new endpoints. You’ll need to unpair and re-pair the switches so the new endpoint is discovered.

OK, that worked. Now I’ll have to wait and see whether the same issue appears on these older switches now that they’re using the new quirk version. (If it does, that’ll pretty clearly point to the issue being in the software stack.)

I’ve now updated all of the switches so that they’re all running the same version, and I’m seeing the same behavior for switch_mode and switch_type on the same switches where it was an issue before. I did notice, however, that one of the switches that was having problems earlier had been on quirk version zhaquirks.inovelli.VZM31SN.InovelliVZM31SNv12. The combination of these two points suggests the issue has nothing to do with the quirk. (I dug into the source code earlier and felt this was likely the case.)

For switch_type, I don’t actually care whether the value is Single Pole or Single Pole Full Sine, so I’m setting switch_type to Single Pole on all of the switches that were having this issue. Obviously this resolves part of the problem for me, but it may just be masking a real issue with the switch. I’ll keep looking into this, though, if anyone has similar experiences or ideas on what to try.
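
For anyone who wants to make this change programmatically rather than through the ZHA UI, a rough sketch against zigpy’s cluster API might look like the following. Attribute ID 22 is the switch_type parameter mentioned above; using 0x00 as the Single Pole value is my assumption, and a manufacturer code may be required depending on the quirk:

# Hypothetical sketch; assumes direct access to the zigpy device object.
async def reset_switch_type(device):
    # 0xFC31 (64561) is the Inovelli manufacturer-specific cluster
    inovelli = device.endpoints[1].in_clusters[0xFC31]
    # Write parameter 22 (switch_type) back to 0x00 (assumed Single Pole default)
    result = await inovelli.write_attributes({22: 0x00})
    # Read it back without the cache to confirm the value stuck
    confirmed, _ = await inovelli.read_attributes([22], allow_cache=False)
    return result, confirmed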

I’ve also started a new discussion, since I think this may be a firmware issue and it may get more help in a thread with the right topic.