vSphere iSCSI Fix – Patch 5 Results > Logs Are Clean!

I have received some great feedback on my last few posts about the much talked about and highly anticipated patch from VMware. I want to touch base with any EqualLogic users who haven't updated yet to let them know that the patch appears to have resolved the problem completely.

I want to quickly discuss the current vmkernel (vmk) setup on some of my ESXi hosts. Two of my hosts are configured with two vSwitches, with three vmk ports under each pNIC. This was the workaround EqualLogic support pushed out to help 'calm' the errors logged on the arrays. Even with this configuration, those two hosts still experienced iSCSI connection drops, just not as frequently. I also have one host with a single vSwitch and six vmk ports under it, which is the configuration that EqualLogic and VMware have documented. BOTH configurations are fully supported by EqualLogic and VMware, so if you made the change to your servers and have multiple vSwitches for your iSCSI, you do not NEED to go back to a single vSwitch. The primary reasons for a single vSwitch are slightly lower memory overhead than running multiple vSwitches, and a cleaner look in the GUI. There is no performance difference between the two, and vSwitch memory overhead is relatively low anyway. I will be switching my two hosts back to a single vSwitch in the near future as time permits.
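For reference, a single-vSwitch layout like the one described above can be sketched with esxcli. This is only an illustrative sketch, not the exact commands from EqualLogic's documentation: the syntax shown is the ESXi 5.x form (4.x used a different `esxcli swiscsi` syntax), and the uplink names, port group names, IP addresses, and the software iSCSI adapter name `vmhba33` are all placeholders you would replace with your own.

```shell
# Create one vSwitch with both physical NICs as uplinks
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3

# One port group and one vmkernel port per iSCSI path
# (repeat the three commands below for vmk2 through vmk6)
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-1
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.10.11 -N 255.255.255.0

# Bind each vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
```

With port binding in place, each vmk port shows up as a separate path to the array, which is what makes the single-vSwitch and multi-vSwitch layouts functionally equivalent.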

I have been following the community forums closely and haven't seen anyone report issues with the patch so far, and in my own testing the patch has not introduced any new errors or problems.

I feel it's safe to say that the patch has fixed the problem completely! It feels like we had to wait forever for the fix (~6 months?!), but thankfully VMware came through with a solid patch that works.

Please feel free to report your experiences with the patch in the comments!


One thought on "vSphere iSCSI Fix – Patch 5 Results > Logs Are Clean!"

  1. Simon

    Indeed same as I am seeing. I updated last week and haven’t seen anything in the logs since. So glad VMware sorted this out for us. I’m currently running with multiple vSwitches too, and will probably look to move back to a single one during a future maintenance period.
