[Solved]: Difference between hypervisor and exokernel

Problem Detail: A type-1 hypervisor is software that creates and runs virtual machines, mediating the guest operating systems’ requests to the hardware.
An exokernel is an operating system kernel that lets programs access the hardware directly or, with the support of libraries that implement the usual abstractions, run different kinds of executables for that architecture.

Both are an abstraction layer between the hardware and application software.
Both let you run different executables/software/operating systems on the same machine.
So what’s the difference between the two?

Asked By : Ignus

Answered By : Wandering Logic

The thing that runs on top of a hypervisor is one or more full operating systems. A hypervisor virtualizes the hardware, so that each operating system is “tricked” into believing that it has an entire machine to itself. The engineering genius that goes into hypervisors is in providing this virtualization at little or no cost (compared to running one operating system directly on the physical hardware).

The thing that runs on top of an exokernel is a bunch of user-level processes. Each process attaches to a library (maybe the same library, maybe different libraries) that provides various services and policies. The goal of the exokernel is to provide only protection (keeping one process from using the resources devoted to a different process) and to leave user-level programs to select the policies that are most efficient for them.

Exokernel was really a research project whose main output was a series of studies about different kinds of operating system services and the barriers to implementing those services efficiently. Another way to view the Exokernel project is as an investigation into how to make a microkernel that isn’t dreadfully inefficient and slow. One of the big problems with microkernels is that they are constantly “crossing process boundaries” and/or switching back and forth between user mode and kernel mode more frequently than a monolithic kernel (like Linux) would. The cost of switching from user mode to kernel mode, or from kernel mode to user mode, is surprisingly high (on the order of thousands of machine cycles per switch).

It’s not clear to me to what degree the Exokernel project succeeded or failed. There are certainly several cases where they came up with surprising new ways to provide resource protection (like their disk model, where the operating system manages permissions for disk blocks and the entire file system is managed at user level). They were also able to leverage several of the good ideas from around that time, for example Carl Waldspurger’s lottery scheduling techniques. And there were some cases in which capability-based protection turned out to be pretty efficient.

There were other cases where providing policy flexibility dramatically increased the number of kernel boundary crossings, and they had to resort to other techniques. For example, in network packet filtering, instead of doing the filtering at user level they did it in the kernel, but gave the user program the ability to submit a little program (in a limited, and therefore safe, programming language) that would run in the kernel. Similar ideas in this domain were being investigated around the same time by the University of Washington’s SPIN operating system project.
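To make the “tricked” part concrete, here is a toy C sketch of the trap-and-emulate loop at the heart of a type-1 hypervisor. Everything in it is hypothetical illustration (the exit reasons and the run_guest stub are made up); real hypervisors rely on hardware support such as Intel VT-x to run the guest and report exits like these.

    /* Toy sketch of a type-1 hypervisor's trap-and-emulate loop.
     * All names here are illustrative, not a real hypervisor API. */

    #include <stdio.h>
    #include <stdint.h>

    enum exit_reason { EXIT_IO, EXIT_CPUID, EXIT_HALT };

    typedef struct { uint64_t regs[16]; int step; } vcpu;

    /* Stub standing in for "enter guest mode and run until a trap". */
    static enum exit_reason run_guest(vcpu *v) {
        static const enum exit_reason script[] =
            { EXIT_CPUID, EXIT_IO, EXIT_HALT };
        return script[v->step++];
    }

    int main(void) {
        vcpu v = {0};
        for (;;) {
            /* Guest code runs directly on the CPU until it attempts a
             * privileged operation, which traps back to the hypervisor. */
            switch (run_guest(&v)) {
            case EXIT_CPUID:
                puts("guest ran CPUID -> report a virtual CPU, not the real one");
                break;
            case EXIT_IO:
                puts("guest touched a device -> emulate it in software");
                break;
            case EXIT_HALT:
                puts("guest halted");
                return 0;
            }
        }
    }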
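The mode-switch cost mentioned above is easy to get a feel for on Linux by timing a trivial system call in a loop. This is a minimal sketch; the numbers vary widely with the CPU and with kernel mitigations, but they tend to support the thousands-of-cycles ballpark.

    /* Time a trivial syscall to estimate user->kernel->user round-trip
     * cost on Linux. Compile with: cc -O2 syscall_cost.c */

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const long N = 1000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            syscall(SYS_getpid);        /* forces a real kernel entry */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per syscall round trip\n", ns / N);
        return 0;
    }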
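The disk model splits cleanly into “protection in the kernel, policy in a library.” The sketch below is purely illustrative (the function and structure names are invented, not the actual exokernel interfaces): the kernel only checks that a process holds the right to a physical block; what the blocks mean (inodes, directories, journals) is up to the file-system library linked into each process.

    /* Illustrative sketch of the exokernel split for disks: the kernel
     * does a per-block permission check and nothing else. Hypothetical
     * names, not real Xok code. */

    #include <stdint.h>
    #include <errno.h>

    #define MAX_BLOCKS 4096

    typedef struct {
        uint32_t owner_pid[MAX_BLOCKS]; /* which process may touch block i */
    } block_table;

    /* Kernel side: pure protection, no policy. */
    int exo_disk_write(block_table *t, uint32_t pid,
                       uint32_t block, const void *buf) {
        if (block >= MAX_BLOCKS || t->owner_pid[block] != pid)
            return -EPERM;              /* the capability check is all we do */
        /* ... issue the raw I/O for exactly this block ... */
        (void)buf;
        return 0;
    }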
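Lottery scheduling itself is simple enough to sketch in a few lines: each task holds some number of tickets, and the scheduler draws a random ticket to pick who runs next, so a task’s CPU share converges to its ticket share. A minimal, self-contained simulation:

    /* Minimal sketch of lottery scheduling: pick the next task by
     * drawing a random ticket. */

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { const char *name; int tickets; } task;

    int hold_lottery(const task *tasks, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) total += tasks[i].tickets;

        int winner = rand() % total;    /* draw one ticket               */
        for (int i = 0; i < n; i++) {   /* find whose range it fell in   */
            winner -= tasks[i].tickets;
            if (winner < 0) return i;
        }
        return n - 1;                   /* not reached                   */
    }

    int main(void) {
        task tasks[] = { {"A", 75}, {"B", 25} };  /* A should get ~75% */
        int runs[2] = {0, 0};

        srand(1);
        for (int i = 0; i < 100000; i++)
            runs[hold_lottery(tasks, 2)]++;

        printf("A ran %d times, B ran %d times\n", runs[0], runs[1]);
        return 0;
    }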
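The “little program in a limited (and therefore safe) language” idea survives today as BPF. The Linux sketch below attaches a classic-BPF filter to a raw socket so that only UDP packets ever reach user space; the filter bytecode is verified and executed by the kernel. (Raw sockets require root.)

    /* Attach an in-kernel classic-BPF filter: accept only UDP packets. */

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>
    #include <linux/filter.h>
    #include <arpa/inet.h>

    int main(void) {
        /* Classic BPF bytecode: load the IP protocol byte of an Ethernet
         * frame (offset 23); accept the packet if it is UDP, else drop. */
        struct sock_filter code[] = {
            { BPF_LD  | BPF_B   | BPF_ABS, 0, 0, 23 },   /* A = proto byte */
            { BPF_JMP | BPF_JEQ | BPF_K, 0, 1, IPPROTO_UDP },
            { BPF_RET | BPF_K, 0, 0, 0xFFFF },           /* accept         */
            { BPF_RET | BPF_K, 0, 0, 0 },                /* drop           */
        };
        struct sock_fprog prog = { 4, code };

        int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_IP));
        if (s < 0) { perror("socket"); return 1; }

        /* The kernel verifies the program is safe before running it. */
        if (setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER,
                       &prog, sizeof(prog)) < 0) {
            perror("setsockopt");
            return 1;
        }
        /* recv(s, ...) now only ever sees UDP packets. */
        return 0;
    }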
Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/28609
