Segmentation in operating systems is a concept that’s as old as time itself. According to my professor at least, most modern operating systems have ditched segmentation and now rely mostly on paging to implement memory protection, thus preventing each process from accessing any memory apart from its own. How then do we still get "Segmentation Faults" in C? Do we somehow still have segmentation as an abstract concept in modern operating systems?
It’s not a thing in C specifically; it’s a thing in Unix-like OSes. Any language that isn’t memory-safe (i.e. that makes it possible to try to access an unmapped page) can compile to an executable that segfaults, including hand-written assembly or Fortran. But yes, C and C++ are two of the most widely used languages that aren’t memory-safe.
And yes, the name is archaic; Unix is old, and there was no need to rename SIGSEGV as the signal the kernel delivers when user-space makes the CPU fault by accessing memory it didn’t map. Renaming would have broken lots of code that used that constant, and just changing the English text string perror prints for it to "invalid page fault" also wouldn’t have been particularly helpful, although possible, since those messages are (I think) baked into libc. But then different libc versions would have different messages for years around the changeover; not worth the trouble.
In a system using paging, any access to a page that’s "not present", or only readable when you’re trying to write, or whatever, causes the CPU to take an exception. The kernel’s page-fault exception handler checks if the page should be accessible, and if so pages it in from disk, does copy-on-write, or whatever (a "major" or "minor" page fault, respectively). If not, the page fault is "invalid", and the kernel delivers a SIGSEGV signal to the process.
Similarly archaic is SIGFPE (Floating Point Exception) for arithmetic exceptions in general, the only one of which that can actually fault by default on most machines is integer division. (The default FP environment has all FP exceptions masked, so they just set sticky flags instead of raising an exception in machine code.) The POSIX standard requires that if a signal is going to be delivered because of an arithmetic exception, it must be SIGFPE.
Similarly, by now SIGSEGV has been standardized by POSIX and other Unix standards, so the time in very early Unix days when anyone could plausibly have changed it has long passed.
(Some systems can also deliver SIGBUS for other kinds of bad-address errors, e.g. Solaris on SPARC delivers SIGBUS for unaligned access.)
Also note that some other kinds of permission errors are overloaded onto SIGSEGV. For example, trying to execute a privileged instruction like x86 lgdt under Linux results in a SIGSEGV. (In that case, user-space would literally be trying to take over the segmentation mechanism that is still used to define what mode the CPU operates in, e.g. a 16- vs. 32- vs. 64-bit code segment in long mode.) The same goes for misaligned SSE SIMD instructions. So it’s not strictly for invalid page faults.
Executables also have "segments" like text and data, where the .text and .data sections are linked, respectively. "The heap" used to be mostly contiguous, growing after .data / .bss (via brk system calls, before mmap(MAP_ANONYMOUS) or mapping pages from /dev/zero was a thing), so it’s possible the "segmentation fault" term didn’t seem like such nonsense to the designers even after OSes started using paging instead of CPU segmentation for memory protection, because the "segments" of an executable still got mapped to contiguous ranges of pages in a process’s memory image.
I don’t know the historical details around the naming of Unix signals vs. its development on PDP-7 and PDP-11 hardware with or without memory-protection features, although apparently some models of PDP-11 had some form of memory protection, and even virtual memory.
There are two hard problems in computer science: Cache invalidation, and naming things, and off-by-one errors.