Class 2
CS 202
7 Sep 2023

1. Last time
2. Intro to processes
3. Process's view of memory (and registers)
4. Stack frames
5. System calls
6. Process/OS control transfers

1. Last time

 - introduced course
 - discussed purpose of OS, purpose of the class
 - covered abbreviated history of OSes

Today: we will introduce processes. We will also use "a process's view
of the world" to do several things:

    - demystify function scope
    - demystify pointers
    - describe how programmers get certain kinds of work out of the
      system.

Some of this may be review from CSO; that's okay, as our perspective
will be different.

2. Process

 - key abstraction: process

 - motivation: you want your computer to be able to do multiple things
   at once:

    - you might find it annoying if, while writing code, you couldn't
      do anything else with your computer during that time.
    - or if we have a computer with multiple users, they all need to
      get things done simultaneously.

 - another motivation: resource efficiency:

    -- example #1: increase CPU utilization:

              --->|wait for input|---->|wait for input|
         gcc ---------------------->

         [this is called "overlapping I/O and computation"]

    -- example #2: reduce latency:

         A goes for 80 s, B goes for 20 s

         A-----------> B -->     : takes B 100 s to finish

         run A and B concurrently, say for 10 s each in turn, and B
         finishes faster (after 40 s rather than 100 s), while A
         finishes only a bit later (after 100 s rather than 80 s).

[DRAW PICTURE:

                                                     loader
    HUMAN --> SOURCE CODE --> EXECUTABLE -----------------> PROCESS

            vi         gcc          as          ld           loader
    HUMAN ---> foo.c ----> foo.s ----> foo.o ----> a.out ----> process

    NOTE: 'ld' is the name of the linker. It stands for 'linkage
    editor'.]

 - classical definition of a process: instance of a running program

   examples: browser, text editor, word processor, PDF viewer, image
   processor, photo editor, messaging app

 - a process can be understood in two ways:

     from the **process's** point of view      <-- today / next class

        high-level: the process sees an abstract machine (more
        accurately, an abstract machine plus lots of calls for
        manipulating and interacting with that abstract machine).

     from the **OS's** point of view

        meaning: how does the OS implement, or arrange to create, the
        abstraction of a process? we will deprioritize this for now,
        and come back to it later in the course.

3. Process view of memory

Background, before we even get to processes: recall from CSO (or your
computer architecture class) that some basic elements in a machine (a
computer) are:

    - a CPU core (a processor), which consists of

        * some execution units (like ALUs) that can perform
          computation in response to _instructions_ (addq, subq,
          xorq, ...). Arithmetic and logical instructions typically
          take one _processor cycle_, or less.

        * a small number of registers, which execution units can read
          from very quickly (in under a cycle). There are many types
          of registers. For today, we only need to think about two
          kinds:

            ** _General-purpose_ registers. There are 16 of these on
               the machines we are considering (the x86-64
               architecture): RAX, RBX, RCX, RDX, RSI, RDI, R8-R15,
               RSP, and RBP.

               RSP is special: it is the _stack pointer_; more on this
               below. RBP is the base pointer (in Intel terminology);
               we will often call this the _frame pointer_.

            ** Special-purpose registers. The only one we consider
               here is RIP, the instruction pointer (also known as the
               program counter). It points to the *next* instruction
               that the processor will execute, assuming there is no
               branch or jump to another location.

    - _Memory_, which takes more time to access than registers
      (usually from 2 to several hundred cycles).
      [In reality, there are "hierarchies of memory", built from
      caches, but for today, we will just think of memory as a
      homogeneous resource.]

    - peripherals (disks, GPUs, ...)

Today, our focus is on registers and memory.

Three aspects to a process:

(i) Each process "has its own registers."

    What does that mean? It means that while the process is executing
    (and from the perspective of the programmer who wrote the program
    that became the process), the process is using those registers
    mentioned above. This is part of what we mean when we say that a
    process has the impression that it's executing on an abstract
    machine.

    (The reason that this should not be taken for granted is that a
    given physical machine will generally have multiple processes that
    are sharing the CPU, and each has "its own" registers, yet each
    process has no knowledge of the others. The mechanisms by which
    the hardware and OS conspire to give each process this isolated
    view are outside of our current scope; we'll come back to this
    when we study context switches. It relates to process/OS control
    transfers, discussed at the end.)

    (What your book calls "direct execution" is the fact that, while
    the process is "executing", it's using the actual CPU's true
    registers.)

(ii) Each process has its own view of memory, which contains:

    * the ".text" segment: memory used to store the program itself
    * the ".data" segment: memory used to store global variables
    * the memory used by the heap, from which the programmer allocates
      using `malloc()`
    * the memory used for the stack, which we will talk about in more
      detail below.

    The process (really, the developer of the program) thinks of
    memory as a contiguous array:

        [ text/code | data | heap -->            <--- stack ]

(iii) For a process, very little else is actually needed, but a modern
      process does have a lot of associated information:

        --- signal state
        --- UID, signal mask, controlling terminal, priority, whether
            being debugged, etc., etc.

4. Stack frames

Crash course in x86-64 assembly + stack:

    syntax:

        movq PLACE1, PLACE2

            means "move the 64-bit quantity from PLACE1 to PLACE2".
            the places are usually registers or memory addresses;
            PLACE1 (the source) can also be an immediate (a constant).

        pushq %rax

            equivalent to:
                [ subq $8, %rsp
                  movq %rax, (%rsp) ]

        popq %rax

            equivalent to:
                [ movq (%rsp), %rax
                  addq $8, %rsp ]

        call 0x12345

            [ pseudo: pushq %rip
                      movq $0x12345, %rip ]

        ret

            [ pseudo: popq %rip ]

    --above we see how call and ret interact with the stack
    --call: updates %rip and pushes the old %rip on the stack
    --ret: updates %rip by loading it with the value stored on the
      stack

inside main, you call function f(); what happens....

example: see handout

- Consider the state of the stack when arriving at line 23 of example.c
  (where main is calling f).

```
        |      ...       |
        | previous %rbp  |  <- base pointer (%rbp)
        |  x (in main)   |
        |      arg       |  <- %rsp
        |                |
        |                |
        |                |
```

- Now consider what happens when the call and the function prologue
  have executed (line 42 in as.txt).

```
        |        ...         |
        |   previous %rbp    |   A
        |    x (in main)     |
        |        arg         |
        | address of line 24 |
        | address A ("pr")   |  <- %rbp
        |      x (in f)      |
        |        ptr         |  <- %rsp

        "pr" = "previous rbp" (which was actually address A)
```

- Finally, note that the epilogue for f (starting on line 49) does the
  reverse of the prologue, thus restoring the stack to how it was
  before.

           |            |    |
           +------------+    |
           |  ret %rip  |   /
           +============+
   %rbp -> | saved %rbp |   \
           +------------+    |
           |            |    |
           |            |    |
           |   local    |    |
           | variables, |     >- current function's stack frame
           |   call-    |    |
           | preserved  |    |
           |   regs,    |    |
           |    etc.    |   /
   %rsp -> +------------+
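For concreteness, here is a minimal C sketch in the spirit of example.c
(a hypothetical reconstruction; the actual handout may differ). It has
a local x and an argument in main, and a local x and a pointer
parameter in f, matching the slots in the diagrams above:

```c
/* hypothetical sketch in the spirit of example.c; the real handout
   may differ. */
#include <stdio.h>

void f(int *ptr) {
    int x = 7;          /* local x: lives in f's stack frame */
    *ptr = x;           /* writes through ptr into main's frame */
}

int main(void) {
    int x = 0;          /* local x: lives in main's stack frame */
    f(&x);              /* the call pushes the return address; then
                           f's prologue saves main's %rbp */
    printf("%d\n", x);  /* prints 7 */
    return 0;
}
```

Compiling code like this with gcc at -O0 typically produces a
frame-pointer-based prologue and epilogue for each function, much like
the hand-written assembly sketch that follows.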
/* here's a different example, not the one we worked in class */

main():
    # set up frame pointer (aka "base pointer")
    pushq %rbp
    movq %rsp, %rbp

    # push any call-clobbered registers that we will need after the
    # call onto the stack, for example:
    pushq %rcx
    pushq %r8
    pushq %r9

    call f

    # restore the call-clobbered registers that we saved. In the above
    # example, that would be:
    popq %r9
    popq %r8
    popq %rcx

    # epilogue: restore call-preserved registers
    movq %rbp, %rsp
    popq %rbp

    ret

f():
    # set up frame pointer (aka "base pointer")
    pushq %rbp
    movq %rsp, %rbp
    ...

--what happens to a function's state, that is, the registers, when a
  function is called? they might need to be saved, or not.
--this is purely a matter of convention in the compiler, **not** the
  hardware architecture

Unix calling conventions:

--on x86-64, *arguments* are passed through registers: %rdi, %rsi,
  %rdx, %rcx, %r8, %r9 (more than six? then the rest spill to the
  stack). And the *return value* is passed from callee to caller in
  %rax.

// the points here are:

- Calling a function requires agreement between caller and callee
  about how arguments are passed, and about which of them is
  responsible for saving and restoring registers.

- In an executing program, the stack is partitioned into a set of
  stack frames, one for each function. The stack frame for the current
  function starts at the base pointer and extends down to the stack
  pointer.

  ** Stack frames are how function scope in languages like C is
     actually implemented -- allowing each function invocation to
     refer to different variables with the same name. In other words,
     the programmer thinks they are writing a function with local
     variables; the compiler has arranged to implement that with stack
     frames.

- de-mystifying pointers: a pointer (like "int* foo") is an address.
  that's it. repeat: a pointer is an address. that address can be:

    - on the stack
    - on the heap
    - in the text section of the program

- because of how stack frames work, it's unequivocally a bug to keep
  using a pointer into a stack frame that no longer exists (for
  example, returning the address of a local variable): that memory
  will be reused by later calls.

5. System calls

- System calls are the process's main interface to the operating
  system. The set of system calls is the API exposed by the kernel to
  user programs. In other words, **syscalls are the mechanism by which
  user-level programs ask the operating system to do things for
  them.**

- To the C programmer, a system call looks exactly like a function
  call: you just invoke the function, get a return value, and keep
  going.

- here are some example system calls:

        int fd = open(const char* path, int flags)
        write(fd, const void *, size_t)
        read(fd, void *, size_t)

  (Aside: fd is a *file descriptor*. This is an abstraction, provided
  by the operating system, that represents an open file. We'll come
  back to this later in the course.)

- lots more:

    - you will work with a few system calls in lab2: stat(),
      readdir().
    - you will also work with some system calls in the concurrency lab
    - on Unix, type "man 2 <name of system call>" to get
      documentation.
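To make this concrete, here is a minimal sketch that strings together
open(), read(), and write(); the filename is just an example, and
error handling is abbreviated:

```c
/* minimal sketch of calling open/read/write directly; "notes.txt" is
   just an example path, and error handling is abbreviated. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    int fd = open("notes.txt", O_RDONLY);        /* ask the kernel to open the file */
    if (fd < 0)
        return 1;                                /* open failed */
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)  /* kernel copies bytes into buf */
        write(1, buf, (size_t)n);                /* fd 1 is standard output */
    close(fd);
    return 0;
}
```

Each of these calls hands control to the kernel and then gets it back,
which is the subject of the next section.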
6. Process/OS control transfers

- To the C compiler (or the assembly programmer) and the machine as a
  whole, a system call has some key differences versus a function call
  (even though both are transfers of control):

    (i) there is a small difference in calling conventions -- a
        process knows that when it invokes "syscall", ALL registers
        (except RAX) are call-preserved. That means that the callee
        (in this case the kernel) is required to save and restore all
        registers; RAX is the exception because that is where the
        return value goes.

    (ii) rather than using the "call" instruction, the process uses a
         different instruction (helpfully called the `syscall`
         instruction). This causes the privilege level to switch.

  The picture looks like this:

        user-level application
              |
              | (open)
              v
   user-level
   ----------------------------------------------
   kernel-level                          ^
              |                          |
              |____> [table]             |
                        |                |
                      open()             |
                       .....             |
                       iret  ------------+

- Vocabulary: when a user-level program invokes the kernel via a
  syscall, it is called *trapping* to the kernel.

- Key distinction: privileged versus unprivileged mode

    --the difference between these modes is something that the
      *hardware* understands and enforces
    --the OS runs in privileged mode
        --can mess with the hardware configuration
    --users' tasks run in unprivileged mode
        --cannot mess with the hardware configuration
    --the hardware knows the difference between privileged and
      unprivileged mode (on the x86, these are called ring 0 and
      ring 3. The middle rings aren't used in the classical setup,
      but they are used in some approaches to virtualization.)

- Overall, there are three ways that the OS (also known as the kernel)
  is invoked:

    A. system calls, covered above.

    B. interrupts. An _interrupt_ is a hardware event; it allows a
       device, whether peripheral (like a disk) or built-in (like a
       timer), to notify the kernel that it needs attention. (As we
       will see later, timers are essential for ensuring that
       processes don't hog the CPU.)

       Interrupts are **implicit**: in most cases, the application
       that was running at the time of the interrupt _has no idea that
       an interrupt even triggered_, despite the fact that handling
       the interrupt requires these high-level steps:

            - the process stops running
            - the CPU invokes the interrupt handler
            - the interrupt handler is part of kernel code, so the
              kernel starts running
            - the kernel handles the interrupt
            - the kernel returns control to the process

       In other words, from the process's viewpoint, it executed
       continuously, but an omniscient observer would know perfectly
       well that the process was in fact _interrupted_ (hence the
       term). In order to preserve this illusion, the processor (CPU)
       and kernel have to be designed very carefully to save _all_
       process state on an interrupt, and to restore all of it
       afterward.

       We will discuss the underlying mechanisms for these control
       transfers later in the course.

    C. exceptions. An _exception_ means that the CPU cannot execute
       the instruction that the process issued. Classically (and for
       this part of the course), you can think of this as "the process
       did something erroneous" (a software bug): dividing by 0,
       accessing a bogus memory address, or attempting to execute an
       unknown instruction. But there are non-erroneous causes of
       exceptions (an example is demand paging, as we will see in the
       virtual memory unit).

       When an exception happens, the processor (the CPU) knows about
       it immediately. The CPU then invokes an _exception handler_
       (code implemented by the kernel). The kernel can handle
       exceptions in a variety of ways:

            - kill the process (this is the default, and is what is
              happening when you see a segfault in one of your
              programs).
            - deliver a signal to the process (this is how runtimes
              like Java generate null-pointer exceptions; processes
              _register_ to catch signals).
            - silently handle the exception (this is how the kernel
              handles certain memory exceptions, as in the demand
              paging case).

       The mechanisms here relate to those for interrupts.

[Acknowledgments: Mike Walfish]