Problem Detail: Andrew S. Tanenbaum, in his book "Modern Operating Systems" (3rd Edition), states that the distinction between operating system software and normal (user-mode) software can sometimes be blurred in embedded systems (which may not have a kernel mode).
Can someone please describe in more detail what an OS without a kernel mode looks like and how it works?
Thank you.
Asked By : Matti
Answered By : Wandering Logic
It's more like an operating system without a user mode. In many embedded systems there is only one "application" (or a small number of them), so the application is just built right into the lowest level of the system. Desktop/server operating systems provide many services, some of them unnecessary for embedded systems. At the lowest level you typically need some wrapper routines for accessing various physical devices (serial buses, sensors, actuators) and interrupt service routines for devices that need asynchronous attention. In a desktop/server OS the device registers, or the memory-mapped locations used to access them, would be protected/hidden from the applications, but in an embedded system you can trust the application writer (see the first sketch below).

As you go up the stack, some services may be necessary for a particular embedded system, but those can be provided by library routines. For example, you might need a file system, but you just implement it as a library that provides open, read, write, close, make_directory and similar routines to the rest of the system. Your embedded system may (or may not) need to be able to load and execute programs not built into the kernel. Again, you can provide the ability to load the executable part of a file off the disk into memory and then jump to some start location.

Your embedded system may (or may not) need dynamic memory allocation. This just means that all the memory not otherwise in use is managed by a library routine (usually called malloc and free or something similar; see the second sketch below).

Your embedded system may (or may not) need multiple threads. Many embedded systems manage the threads cooperatively. That means there are library routines to allocate and initialize memory for a new thread; the library then selects one thread to run for a while, and when that thread calls some function (often called yield) control jumps back to the thread-management library, which selects a different thread to run and performs any necessary bookkeeping (see the third sketch below). This can be taken a step further by making the thread library preemptive: an interrupt service routine calls the library's yield routine on a timer interrupt if the current thread doesn't call it itself.

Since you know the applications that are running (and trust their authors), you generally don't bother with the hardware expense of separate protected address spaces. Similarly, you generally don't bother with virtualizing memory (using the disk or SSD to make memory look larger than it really is).
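First sketch: as a concrete illustration of the wrapper-routine idea, here is a minimal C fragment that writes a byte to a polled, memory-mapped UART. The base address, register offsets, and status bit are invented for the example; a real part's datasheet would define them.

    #include <stdint.h>

    /* Hypothetical memory-mapped UART: address and register layout are
     * made up for illustration only. */
    #define UART_BASE     0x40001000u
    #define UART_STATUS   (*(volatile uint32_t *)(UART_BASE + 0x00u))
    #define UART_DATA     (*(volatile uint32_t *)(UART_BASE + 0x04u))
    #define UART_TX_READY 0x1u

    /* Wrapper routine the application calls directly: there is no
     * system-call boundary, because everything runs in the same mode. */
    void uart_putc(char c)
    {
        while ((UART_STATUS & UART_TX_READY) == 0u) {
            /* busy-wait until the transmitter can accept another byte */
        }
        UART_DATA = (uint32_t)(unsigned char)c;
    }

Because there is no user/kernel split, nothing stops the application from poking UART_DATA directly; the wrapper is a convenience, not a protection boundary.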
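Second sketch: for the dynamic-memory point, the simplest possible form of "a library routine manages the otherwise unused memory" is a bump allocator over a fixed static region. The heap size and alignment are arbitrary choices, and unlike a real malloc/free pair this version never reclaims memory; it only shows that allocation can be an ordinary library call.

    #include <stddef.h>
    #include <stdint.h>

    #define HEAP_SIZE (16u * 1024u)   /* arbitrary size for the example */

    static uint8_t heap[HEAP_SIZE];
    static size_t  heap_used;

    void *simple_malloc(size_t n)
    {
        size_t rounded = (n + 7u) & ~(size_t)7u;   /* keep 8-byte alignment */
        if (rounded > HEAP_SIZE - heap_used)
            return NULL;                           /* out of memory */
        void *p = &heap[heap_used];
        heap_used += rounded;
        return p;
    }

    void simple_free(void *p)
    {
        (void)p;   /* a bump allocator cannot reclaim; a real free() would */
    }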
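Third sketch: cooperative multitasking in its simplest form. A real yield() needs a per-thread stack and a few lines of context-switching assembly (or setjmp/longjmp), so to stay self-contained this version uses run-to-completion tasks: each "thread" is a function that does a slice of work and returns, and the scheduler loop plays the role of the thread-management library. The task names and bodies are placeholders.

    #include <stddef.h>

    typedef void (*task_fn)(void);

    static void blink_task(void)  { /* toggle an LED, check a software timer, ... */ }
    static void sensor_task(void) { /* poll a sensor, buffer a sample, ... */ }

    static task_fn tasks[] = { blink_task, sensor_task };
    #define NUM_TASKS (sizeof tasks / sizeof tasks[0])

    int main(void)
    {
        /* Scheduler loop: run each task in turn until it voluntarily
         * returns -- the moral equivalent of the task calling yield(). */
        for (;;) {
            for (size_t i = 0; i < NUM_TASKS; i++)
                tasks[i]();
        }
    }

Making this preemptive, as the answer notes, means having a timer interrupt force the switch instead of waiting for the running task to give up control.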
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/23347