
Monday, July 28, 2008

What is PF Usage under Task Manager?



One of the DBAs asked me the following question about PF Usage under Task Manager.

=========================================================================
The reason I am asking is that the PF Usage on one of my boxes is 29 GB, but when I looked at what was allocated, the page file on the C drive was only 2 GB, so I am confused.

Can you please shed some light?




I was sure that what "PF Usage" shows here is some kind of aggregate value, so I googled it and found the following explanation on one of the forums:

Per Windows Internals Chapter 7: Memory Management -> Page Fault Handling:

Note that the Mem Usage bar (called PF Usage in Windows XP and Windows Server 2003) is actually the system commit total. This number represents potential page file usage, not actual page file usage. It is how much page file space would be used if all the private committed virtual memory in the system had to be paged out all at once.

This makes a lot of sense to me, and my guess was correct.
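To see why a 29 GB "PF Usage" can coexist with a 2 GB page file, here is a minimal arithmetic sketch. The RAM size is an assumption for illustration; only the 2 GB page file and 29 GB commit total come from the question above.

```python
# Hypothetical numbers illustrating why "PF Usage" (the system commit total)
# can far exceed the configured page file size.
ram_gb = 32            # physical RAM (assumed for this example)
pagefile_gb = 2        # page file configured on C: (from the question)
commit_total_gb = 29   # "PF Usage" reported by Task Manager

# The commit limit is roughly RAM + page file; the commit total must stay
# under it, so a 29 GB commit total implies plenty of RAM, not a 29 GB file.
commit_limit_gb = ram_gb + pagefile_gb

# Commit total is *potential* page file usage: how much space would be
# needed if every private committed page were paged out at once. Most of
# that commit charge is currently backed by RAM, not by the 2 GB page file.
backed_by_ram_gb = commit_total_gb - pagefile_gb  # at least this much in RAM
print(commit_limit_gb, backed_by_ram_gb)
```

So the 2 GB page file only caps how much can actually be paged out; the counter itself tracks commitments, most of which live in RAM.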



Saturday, May 17, 2008

Hypervisor and I/O

I was reading a blog post by Avi, who is a kernel developer, and found it very interesting. In it he explains why I/O matters so much for a hypervisor and how vendors like VMware and Xen maintain theirs. VMware has its own proprietary hypervisor, which means any development or modification can be made ONLY by VMware, whereas Xen is open with its hypervisor, so any kernel developer like Avi can change it, for example to add drivers. It's true that a hypervisor takes an I/O hit, because all driver communication happens through this I/O layer. You can read his complete post here.


--------------------------------------------------
I/O performance is of great importance to a hypervisor. I/O is also a huge maintenance burden, due to the large number of hardware devices that need to be supported, numerous I/O protocols, high availability options, and management for it all.
VMware opted for the performance option, by putting the I/O stack in the hypervisor. Unfortunately the VMware kernel is proprietary, so that means VMware has to write and maintain the entire I/O stack. That means a slow development rate, and that your hardware may take a while to be supported.
Xen took the maintainability route, by doing all I/O within a Linux guest, called "domain 0". By reusing Linux for I/O, the Xen maintainers don't have to write an entire I/O stack. Unfortunately, this eats away performance: every interrupt has to go through the Xen scheduler so that Xen can switch to domain 0, and everything has to go through an additional layer of mapping.
Not that Xen solved the maintainability problem completely: the Xen domain 0 kernel is still stuck on the ancient Linux 2.6.18 release (whereas 2.6.25 is now available). These problems have led Fedora 9 to drop support for hosting Xen guests, leaving kvm as the sole hypervisor.
So how does kvm fare here? like VMware, I/O is done within the hypervisor context, so full performance is retained. Like Xen, it reuses the entire Linux I/O stack, so kvm users enjoy the latest drivers and I/O stack improvements. Who said you can't have your cake and eat it?