Comments on: VMware Stretches ESXi To Be A Disaggregated Memory Hypervisor
https://www.nextplatform.com/2021/10/11/vmware-stretches-esxi-to-be-a-disaggregated-memory-hypervisor/

By: Mark Funk (Wed, 20 Oct 2021 12:21:30 +0000)

Timothy – Hopefully I am missing some key concept here. I can see, and of course like, where you are heading with this, but before it's real it strikes me that the hardware first needs to play along. I'm thinking in terms of NUMA-like cross-node enablement in the likes of at least IBM's Power10 or HPE's "The Machine", on which a multi-node hypervisor could reside.

Keep in mind that what a virtualizer like a hypervisor does is hide the details of the hardware from higher levels of code, including the OSes. But the hardware that software expects is nonetheless still there, and that software expects to take some higher-level address and, after translation, use it to access memory anywhere. Sure, memory in a completely different system can be accessed over an I/O link, say one enabled via PCI-E, but that access requires some hypervisor to actually copy the contents of remote memory into local memory before the access (and later put it back). We're talking about significant latency that the application software might not have been expecting.

To get the kind of access I think you are suggesting, a core on one node needs to provide, after secure hypervisor configuration, a real address known to its node to the hardware linking it to another node, and that link hardware then translates the real address to access the remote node's memory. The multi-node hypervisor is not involved except in enabling that entirely hardware-based processing. Again, it needs the hardware architecture first, and that today is proprietary.

Can your secure multi-node hypervisor be created for, say, Power10? Sure, but only within the bounds of a cluster of Power10 nodes. There are other examples – IBM's CAPI comes to mind – but it's the systems with these specialized links that first enable the distributed shared memory I think you are suggesting. Again, I might very well be missing something.
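As a toy model of the copy-before-access path described above: the first touch of a "remote" page pays a full page copy into local DRAM before the access can proceed, while later touches hit the local copy. Everything here (hv_translate, remote_pool, the resident table) is invented for illustration and assumes nothing about VMware's, IBM's, or HPE's actual interfaces.

/*
 * Sketch: hypervisor-mediated access to another node's memory over an
 * I/O link. First touch copies the whole page locally; later touches
 * hit the local copy. All names are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    16

static uint8_t remote_pool[NPAGES][PAGE_SIZE]; /* stands in for another node's DRAM  */
static uint8_t local_cache[NPAGES][PAGE_SIZE]; /* local DRAM staging area            */
static int     resident[NPAGES];               /* has this page been pulled over yet */

/* Simulated translation: first touch pays a full page copy over the link. */
static uint8_t *hv_translate(size_t page)
{
    if (!resident[page]) {
        /* In a real system this memcpy would be a DMA over PCI-E or an
         * RDMA fabric; this copy is where the extra microseconds go. */
        memcpy(local_cache[page], remote_pool[page], PAGE_SIZE);
        resident[page] = 1;
    }
    return local_cache[page];
}

int main(void)
{
    memset(remote_pool, 0xAB, sizeof remote_pool);

    uint8_t *p = hv_translate(3); /* slow path: page copied in   */
    uint8_t *q = hv_translate(3); /* fast path: already resident */
    printf("byte = %#x, same local copy: %s\n", p[0], p == q ? "yes" : "no");
    return 0;
}

The hardware-translated alternative the comment asks for would skip hv_translate entirely: the link hardware itself would translate the node-local real address and satisfy the load at cache-line granularity, with the hypervisor only setting up the mapping.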

By: Rob Young (Fri, 15 Oct 2021 01:13:37 +0000)

Shouldn't DDR latency show 0.01 us, not 0.1 us? Perhaps tuning helps, but latency offsets across memory tiers may be long-term punishment.
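For a sense of how that tier offset compounds, a back-of-envelope average-access-time calculation with assumed round numbers (0.1 us local DDR load-to-use, 3 us for a far tier over an RDMA-class fabric; neither figure is taken from the article's chart):

/*
 * Average memory access time across two tiers, under assumed latencies.
 * Even a few percent of accesses landing in the far tier multiplies the
 * effective latency several times over.
 */
#include <stdio.h>

int main(void)
{
    const double ddr_us    = 0.1; /* local DDR, ~100 ns load-to-use (assumed)   */
    const double remote_us = 3.0; /* disaggregated tier over a fabric (assumed) */

    for (int pct = 99; pct >= 90; pct -= 3) {
        double local_frac = pct / 100.0;
        double amat = local_frac * ddr_us + (1.0 - local_frac) * remote_us;
        printf("local hit rate %d%% -> avg access %.3f us (%.1fx DDR)\n",
               pct, amat, amat / ddr_us);
    }
    return 0;
}

At a 99% local hit rate the average access is already ~1.3x DDR; at 90% it is nearly 4x, which is the "long-term punishment" the comment points at, whatever the exact DDR baseline on the chart.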

By: Psion Prime (Wed, 13 Oct 2021 16:35:52 +0000)

That’s a lot of reinvention for what already exists. It’s called a mainframe.

By: stolsma (Tue, 12 Oct 2021 06:52:07 +0000)

Nice developments! Real disaggregation is getting closer and closer!

Intel could buy a PCIe switch vendor or develop the tech themselves; they already have the switching knowledge in-house, as can be seen in the Xe-link implementation for Ponte Vecchio, the Barefoot Tofino, and the PCIe FPGA chiplets. If they buy, it would probably be to pull the IP from the market and keep it out of the hands of their direct competitors….
