2024-1 Semester: A Collection of Learning Thoughts and Reflections
Preface
I am fond of a line from Jiang Kui's ci poem "An Xiang" ("Hidden Fragrance"): "When at last I seek that faint fragrance again, it has already become a painting framed by the small window." Living through so many of those "present moments," I failed to seize them well. Admittedly this is partly the limitation of being inside time itself: no one can coordinate every step from a global view. But what I can do is persist without pause and hold on to a passion that outside noise cannot wear away. "Year after year, autumn grass grows along the paths." Looking back, it feels as though I did little and achieved little. Yet retracing my steps one by one, this semester's harvest was far richer than the last; my regret is only that I did not finish the plans and goals I had set for myself.
What follows is a collection of questions and thoughts from my daily study. They matter greatly to me: the puzzles, flashes of inspiration, and small joys I gathered bit by bit over the semester. I have grouped them by topic so that when the same questions arise again I can quickly find my earlier answers, and so that they keep spurring me on: keep thinking, stay passionate!
Software and Hardware Technologies
How the web works
A practical example: How the web works - Learn web development | MDN (mozilla.org)
How is a website built? (bilibili)
The principle behind Electron's cross-platform portability
Starting from Electron's architecture: a deep dive into how Electron works across platforms (Juejin)
Tools Worth Knowing
GUI: What is Qt?
Qt is a cross-platform application development framework for desktop, embedded and mobile.
Qt is not a programming language on its own. It is a framework written in C++. A preprocessor, the MOC (Meta-Object Compiler), is used to extend the C++ language with features like signals and slots. Before the compilation step, the MOC parses the source files written in Qt-extended C++ and generates standard compliant C++ sources from them. Thus the framework itself and applications/libraries using it can be compiled by any standard compliant C++ compiler like Clang, GCC, ICC, MinGW and MSVC.
GCC (GNU Compiler Collection) versus Clang/LLVM
GCC vs. Clang/LLVM: An In-Depth Comparison of C/C++ Compilers - findumars - cnblogs.com
Three Mainstream C/C++ Compilers
Visual C++, the GNU Compiler Collection (GCC), and Clang/LLVM are the three mainstream C/C++ compilers in industry. Visual C++ provides a graphical user interface (GUI) and is easy to debug, but it is not available on Linux. This article therefore focuses on comparing GCC with Clang/LLVM.
GCC is a programming-language compiler developed by the GNU project. It is a set of free software released under the GNU General Public License (GPL) and the GNU Lesser General Public License (LGPL). It is the official compiler of GNU and Linux systems, and the primary compiler used to build other UNIX operating systems.
LLVM is a collection of modular compiler components and toolchains. It can optimize programs at compile time, run time, and idle time, perform link-time optimization, and generate code. LLVM can serve as the backend for compilers of many languages. Clang is a C, C++, Objective-C, and Objective-C++ compiler front end built on LLVM, written in C++, and released under the Apache 2.0 license. Clang aims mainly to deliver better performance than GCC.
Difference between CMake and Make
Notes from the Four Core CS Courses (Mainly OS)
Implementing a Command Interpreter
Two approaches to implement a command interpreter:
- The command interpreter itself contains the code to execute the command. This has the advantage of faster execution (think about why: it avoids extra context switches!), since the interpreter can run the command immediately without relying on external system programs. The downside is a larger and more complex interpreter, since it must contain the code for every possible command.
- Most commands are implemented through separate system programs (which themselves run in user mode and enter the kernel via system calls). This offers modularity: each command is independent and can be updated or modified without affecting the interpreter. The trade-off is that the interpreter must rely on external programs to execute commands, which may mean slower performance and higher resource usage, especially when many system programs are involved (frequent context switches and mode switches: user mode -> kernel mode).
syscall
Q: Is a system call a process? What is the difference between a process and a routine, and how does the latter execute?
A: No, a system call is not a process. Instead, a system call is a mechanism that allows a program (or process) to request a service from the operating system’s kernel, such as accessing hardware resources, creating files, or managing processes. In essence, system calls are indeed pieces of compiled code that the OS kernel provides to handle specific, privileged tasks safely and efficiently.
Video resource: Do 99% of developers really not understand system calls? | System calls, kernel mode, user mode (bilibili)
Bootloader
Q: When the CPU receives a reset event, its instruction register is loaded with a predefined memory location, where the initial bootstrap program resides. The CPU then executes the boot loader to load the kernel into main memory. Why not load the kernel into memory directly at startup?
How is the bootstrap program loaded and then executed? Does the CPU execute the small piece of firmware code entirely in place, or does it load the complete bootstrap program into main memory and execute it there?
A: The bootstrap loader is essential for the startup process because, at power-on or reset, the CPU has very limited capability—it doesn’t “know” where or how to load the full operating system. Here’s how it all works:
- Directly Loading the Kernel: Loading the kernel directly at startup would mean the CPU would have to know exactly where and how the kernel is stored. But at the time of powering on, the CPU has no information about the storage device (like a hard disk or SSD) or how to access it. It only has a small set of instructions and memory address where it starts executing upon reset. Hence, we use a bootstrap loader.
- Bootstrap Process: When the CPU starts up, it immediately begins executing code from a fixed address, often set to the Basic Input/Output System (BIOS) or similar firmware in modern computers. This code is very minimal and is part of a read-only memory (ROM). This firmware contains a tiny program known as the initial bootstrap loader, which is responsible for performing the initial checks and locating a more complex loader (usually on a storage device) that can load the OS.
- How the Bootstrap Program is Loaded: The initial bootstrap code in firmware runs completely from ROM. This code performs essential checks (like the Power-On Self Test) and then searches for a bootable device. Once it locates the storage device containing the bootloader, it loads a small portion of the bootloader into memory.
- From Firmware to Main Memory Execution: The small firmware-based bootstrap then loads the complete bootloader (usually stored in the Master Boot Record or MBR) into main memory. This complete bootloader now has the necessary instructions to locate and load the kernel, which then takes over and starts initializing the operating system.
In summary: The initial bootstrap in ROM is essential because the CPU lacks any knowledge of storage at startup. This small, firmware-resident code loads a more capable bootloader, which then brings in the OS kernel.
Bootstrap Program vs. Bootloader
The bootstrap program (or initial bootstrap) and the bootloader are indeed separate parts of the startup process, and they each play a specific role in getting the OS loaded. Here’s a quick recap:
- Bootstrap Program:
- Location: Stored in ROM (often part of the firmware, like BIOS or UEFI).
- Purpose: Executes first, performing initial system checks and finding a bootable device.
- Function: Loads the bootloader from the storage device (e.g., hard disk, SSD) into main memory.
- Execution: The CPU automatically starts here when powered on or reset.
- Bootloader:
- Location: Stored on a bootable storage device, like in the Master Boot Record (MBR) or a dedicated partition.
- Purpose: Loads the operating system kernel into main memory.
- Execution: Once the bootstrap program loads it into RAM, it can execute more complex instructions to locate and load the OS kernel.
- Kernel and OS Startup:
- Once the bootloader has done its job and loaded the kernel into memory, it transfers control to the kernel.
- At this point, the operating system officially begins to run, taking over full control to initialize and manage hardware, load essential services, and provide an environment for user applications.
So the sequence is: bootstrap program (in ROM) → bootloader (from storage) → kernel (OS). After the kernel loads, the OS is running!
What is ABI (Application Binary Interface)?
From Wikipedia:
In computer software, an application binary interface (ABI) is an interface between two binary program modules. Often, one of these modules is a library or operating system facility, and the other is a program that is being run by a user.
An ABI defines how data structures or computational routines are accessed in machine code, which is a low-level, hardware-dependent format. In contrast, an application programming interface (API) defines this access in source code, which is a relatively high-level, hardware-independent, often human-readable format.
Routine and Process
Routine:
- A routine (also called a function, procedure, or subroutine) is a block of reusable code designed to perform a specific task.
- It runs within the context of a single process.
- It is invoked by other parts of a program and executes in the same memory space as the program. – spot on!
- Examples include standard library functions like `printf()` in C or a custom function like `calculateSum()`.
Process:
- A process is an instance of a program that is in execution.
- It is a larger execution unit that includes the program’s code, data, and resources (e.g., memory, file handles).
- Processes run independently and are managed by the operating system.
- They often communicate with other processes through mechanisms like inter-process communication (IPC).
Port Forwarding and NAT
You’re absolutely right that a NAT gateway inherently performs address translation and reverse mapping as part of its normal operations. Your understanding of how the gateway matches incoming packets to the correct LAN node based on its NAT table is spot-on. Let me clarify where port forwarding fits into the picture and why it’s necessary in some cases.
NAT Basics Recap
- Outbound traffic: When a LAN node (e.g., `192.168.1.100`) sends a packet to an external server, the gateway translates the source IP (`192.168.1.100`) and port (e.g., `10`) to its own external IP (`1.1.1.1`) and a unique port (e.g., `3000`). This creates an entry in the NAT table that maps `(1.1.1.1:3000)` to `(192.168.1.100:10)`.
- Inbound traffic: When the external server responds to `(1.1.1.1:3000)`, the gateway checks its NAT table, finds the mapping, and forwards the packet to `(192.168.1.100:10)`.
This works seamlessly for connections initiated from within the LAN because the NAT table is populated dynamically when the outbound connection is made.
The Role of Port Forwarding
Port forwarding is needed for connections initiated from outside the LAN (e.g., an external client trying to access a server or service running inside the LAN). Here’s why:
- No NAT Table Entry for Unsolicited Traffic:
  - If an external client sends a packet to `1.1.1.1` with no prior outbound connection from the LAN, the gateway has no corresponding NAT table entry.
  - The gateway doesn't know which internal node to forward the packet to, so it drops the packet by default.
- Port Forwarding as a Manual Mapping:
  - Port forwarding creates a static rule in the gateway that maps incoming packets on a specific port of the gateway's external IP to a specific internal node and port.
  - For example, you can configure the gateway to forward traffic on `1.1.1.1:8080` to `192.168.1.100:80`.
- Use Case:
  - Say you're hosting a web server on `192.168.1.100` (LAN) and want clients on the internet to access it. Without port forwarding, their requests will be dropped because the gateway has no NAT table entry.
  - By setting up port forwarding, the gateway knows to forward all packets received on `1.1.1.1:8080` to `192.168.1.100:80`, allowing external clients to reach the web server.
How Port Forwarding Differs from NAT
- Dynamic vs. Static:
- NAT dynamically creates mappings as a result of outgoing connections.
- Port forwarding involves static, predefined mappings for incoming connections.
- Purpose:
- NAT primarily enables multiple LAN nodes to share a single public IP for outbound traffic.
- Port forwarding enables specific services inside the LAN to be accessible from the outside.
Analogy: NAT vs. Port Forwarding
Think of NAT as a receptionist in an office building:
- Outgoing call: The receptionist notes which employee is making the call and forwards it to the recipient. When the recipient calls back, the receptionist connects them to the correct employee.
- Incoming call without prior contact: If someone calls the office without knowing who to talk to, the receptionist doesn’t know where to direct the call unless there’s a predefined rule (“All calls to extension 8080 go to Bob”).
Port forwarding is like setting up such predefined rules for incoming calls.
How a Process Returns
Q: Does a program end when execution reaches `return`?
A: Here is a breakdown of the entire process of executing a simple "Hello, World!" program, from the moment the kernel loads it to the moment the exit status is returned to the parent. It covers the key functions `execve`, `__libc_start_main`, and `_exit`.
1. Source Code
The simplest “Hello, World!” program in C:
```c
#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");
    return 0;
}
```
2. Compilation
When you compile this program (e.g., `gcc hello.c -o hello`), the following steps occur:
- Translation: The source code is converted into an object file (`hello.o`) containing machine code.
- Linking:
  - The linker combines `hello.o` with the C runtime and standard library (e.g., `libc.so`).
  - The resulting binary references external shared libraries (if dynamically linked) or embeds those libraries (if statically linked).
3. Program Execution
Assuming the compiled program is called `hello`, when you run `./hello`, here's the step-by-step breakdown:
Step 1: The Kernel Loads the Program
- `execve` System Call: The shell (or another parent process) calls the `execve` system call to start your program: `execve("./hello", argv, envp);`. The kernel then:
  - Reads the program's ELF header to understand its structure.
  - Maps the program's sections (e.g., code, data, etc.) into memory.
  - Sets up the process's stack with `argv` and `envp`.
- Transfer Control to `_start`: `_start` is the entry point of the program, specified in the ELF file. It is provided by the C runtime and is the very first code executed in your process.
Step 2: `_start` Initializes the Program
- Set Up the Environment: `_start` is a small assembly stub that:
  - Sets up the stack and registers.
  - Prepares `argc`, `argv`, and `envp` for passing to `__libc_start_main`.
- Call `__libc_start_main`: `_start` calls `__libc_start_main` (part of `libc`), passing the address of `main()` and other parameters: `__libc_start_main(main, argc, argv, ...);`
Step 3: `__libc_start_main` Orchestrates Execution
- Initialize the Runtime: `__libc_start_main` does several things before calling `main()`:
  - Initializes the heap and other runtime features.
  - Runs constructors for global objects (if using C++).
  - Sets up dynamic linking (if needed).
- Call `main()`: After initialization, `__libc_start_main` calls `main()`: `int exit_code = main(argc, argv);`
Step 4: `main()` Executes User Code
- In our case, `main()` calls `printf("Hello, World!\n");`. `printf` formats the string and writes it to the standard output buffer.
- After `printf` finishes, `main()` returns 0.
Step 5: Cleanup and Exit
- Return to `__libc_start_main`: After `main()` returns, control goes back to `__libc_start_main`.
- Call `exit()`: `__libc_start_main` calls the `exit()` function with the return value from `main()`: `exit(exit_code);`
- Perform Cleanup: `exit()`:
  - Runs functions registered with `atexit()`.
  - Flushes and closes all open streams.
  - Calls `_exit()` to terminate the process.
- Call `_exit()`: `_exit()` makes a system call (`exit_group` or `exit`) to notify the kernel that the process is terminating: `syscall(SYS_exit_group, exit_code);`
Step 6: The Kernel Cleans Up
- The kernel performs final cleanup:
- Reclaims memory and other resources used by the process.
- Marks the process as terminated.
- The kernel updates the parent process (e.g., the shell) with the exit status of the terminated program.
Summary with Key Functions
Here’s the entire process mapped to the key functions:
```
1. Parent process calls `execve("./hello", ...)`      -> kernel loads the ELF binary
2. Kernel transfers control to `_start`               -> C runtime entry point
3. `_start` calls `__libc_start_main(main, ...)`      -> runtime initialization
4. `__libc_start_main` calls `main(argc, argv)`       -> user code runs (printf)
5. `main` returns; `__libc_start_main` calls `exit()` -> atexit handlers, flush streams
6. `exit()` calls `_exit()` (`exit_group` syscall)    -> kernel reclaims resources
7. Kernel reports the exit status to the parent       -> e.g., the shell reads $?
```
This process illustrates how a simple program involves several layers of initialization, execution, and cleanup, seamlessly transitioning between user-level code and kernel-level actions.
Core dumped
What is a Core Dump?
A core dump is a file (snapshot) that captures the memory state of a running process at a specific point in time, usually when the program crashes due to a severe error like a segmentation fault. It contains:
- Memory Contents: The contents of the program’s memory (stack, heap, and data segments) at the time of the crash.
- Registers: The values in CPU registers.
- Execution Context: Information about the program’s execution, such as the program counter and the instruction that caused the fault.
- Other Metadata: Details about the process, such as environment variables, command-line arguments, and signal information.
Why Does a Core Dump File Need to Be Generated?
A core dump is ideal when:
- The program is no longer running: You can’t attach a debugger because the process has terminated.
- The crash is hard to reproduce: Core dumps provide a snapshot of the fault, so you don’t need to recreate the conditions leading to the crash.
- Sharing Debug Information: You can send the core dump to someone else (e.g., another developer or a support team) for analysis.
Is a Core Dump Necessary?
A core dump file is useful but not strictly necessary for debugging a program crash. It depends on your situation and debugging needs.
Use Core Dumps When:
- The process has already crashed and terminated.
- The crash is difficult to reproduce.
- You need to analyze the fault on a different machine or share debugging data.
Use GDB Without Core Dumps When:
- You can reproduce the issue easily in your environment.
- You want to interactively explore the program’s state (e.g., set breakpoints before the crash).
- You’re debugging a long-running or server process where capturing a live snapshot is more efficient than generating a dump.
User-level and Kernel-level Multithreading
The key difference between explicit user-level thread libraries (e.g., `pthread` or `windows.h`) and implicit kernel-level thread abstractions (e.g., thread pools, OpenMP, or GCD) lies in control granularity and abstraction level, which determine how threads are managed and who is responsible for managing them.
1. User-Level Thread Libraries (Explicit Control)
Examples: `pthread` (POSIX threads), `windows.h` (Windows threading API)
Characteristics:
- Explicit Thread Management: The programmer directly creates, manages, and synchronizes threads using APIs like `pthread_create`, `pthread_join`, or `CreateThread`.
- Fine-Grained Control: The library exposes lower-level primitives, allowing the programmer to:
  - Decide when and how to create threads.
  - Explicitly synchronize threads with mutexes, condition variables, etc.
  - Handle thread termination and resource cleanup.
- User-Space Scheduling: If implemented as purely user-level threads (as in the Many-to-One model), the kernel may not even be aware of these threads; the thread library handles scheduling in user space. This makes thread management lightweight but can suffer from blocking issues.
2. Kernel-Level Thread Libraries (Implicit Abstractions)
Examples: thread pools, OpenMP (`omp.h`), Grand Central Dispatch (GCD)
Characteristics:
- Higher-Level Abstractions: These libraries and frameworks hide most low-level thread management details from the programmer. Instead of managing threads directly, you submit tasks or use parallel constructs, and the system determines how threads are allocated.
- Kernel-Managed Threads: These abstractions typically rely on kernel threads for execution, so the kernel scheduler handles thread creation, termination, and context switching.
- Dynamic Resource Management: They dynamically adjust thread usage to match the available hardware resources (e.g., CPU cores) and the workload. For example:
  - Thread pools reuse threads to minimize thread creation and destruction overhead.
  - OpenMP dynamically distributes work across threads with constructs like `#pragma omp parallel for`.
  - GCD (on Apple platforms) uses queues to schedule tasks onto kernel threads efficiently.
Why They “Seem the Same” to Programmers
From a usability perspective, they may feel similar because:
- Both allow concurrent execution.
- The higher-level abstractions are designed to make concurrency easier, hiding the underlying complexity.
However, the level of abstraction and degree of control are vastly different. If you're using `pthread`, you're explicitly in charge of the threads, while with something like OpenMP or GCD you simply define tasks and the framework/library manages everything else.
Some Concepts
What is Node.js
Node.js is an open-source and cross-platform JavaScript runtime environment, used for executing JavaScript code outside of a web browser.
There are a number of characteristics that make Node.js what it is:
- Google Chrome V8 JavaScript Engine: This runtime environment is built on Google Chrome's V8 engine. Much as a Java Virtual Machine executes bytecode, V8 takes JavaScript and compiles it into machine code the CPU can execute.
- Modules/Packages: Node.js has npm, a node package manager, with a library of over 350,000 packages to help get your project or application off the ground with efficiency and ease.
- Event Driven, Single-Threaded I/O Model: JavaScript relies on user interactions or events to run. In most cases, code is run synchronously. Server requests and other such asynchronous tasks rely on a system of promises or async/await functions to handle these inputs and outputs.
What is Sandbox
In computer security, a sandbox is a security mechanism for separating running programs, usually in an effort to mitigate system failures and/or software vulnerabilities from spreading.
The “sandbox” metaphor derives from the concept of a child’s sandbox—a play area where kids can build, destroy, and experiment without causing any real-world damage.
Language Server Protocol (LSP)
In the context of the Language Server Protocol (LSP), “client” and “server” refer to the two main components involved in the communication and execution of language-related tasks.
- Client: The client is typically an editor or integrated development environment (IDE) like Visual Studio Code, Vim, or any other text editor that supports LSP. The client is responsible for initiating requests for language features such as autocomplete, syntax highlighting, go-to-definition, and error-checking. In short, the client sends requests to the server to receive language-specific functionality and displays the results to the user.
- Server: The server is the language server, which provides language-specific information and features. It could be a standalone application or a process initiated by the client. The server responds to client requests by providing data and functionalities like code completion, diagnostics, and symbol information based on the programming language it’s tailored for (e.g., Python, JavaScript, or C++). The server operates by analyzing the code, managing the workspace, and returning relevant information back to the client.
This client-server architecture in LSP enables any editor that implements an LSP client to interact with multiple language servers, making it a highly flexible and language-agnostic solution for language support in editors.
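On the wire, LSP messages are JSON-RPC 2.0 payloads, each preceded by a `Content-Length:` header. As an illustration only (the file URI and position are made up), a client's go-to-definition request looks roughly like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///home/user/project/app.c" },
    "position": { "line": 12, "character": 6 }
  }
}
```

The server answers with a message carrying the same `id` and a `result` holding the target document `uri` and `range`, which the editor then uses to jump to the definition.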
Some Reflections
Often you feel that understanding a piece of knowledge is simple, but only when you actually get your hands dirty do you discover that your previous understanding was only skin-deep.
One of the most important spirits for young people like you is to try new things and bid farewell to the past.
Remember, learn to use `man`, learn to use everything. RTFM
Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in the environment of that thing.
"A scholar need not step outside his gate, yet he knows all the affairs under heaven" was mere empty talk in the technologically undeveloped old days. Although this saying can be realized in the technologically developed modern age, the people with genuinely first-hand knowledge are those engaged in practice throughout the world. Only when they have gained "knowledge" through their practice, and that knowledge has reached the "scholar" through writing and technical media, can the scholar indirectly "know the affairs under heaven." If you want to know a certain thing or class of things directly, only by personally participating in the practical struggle to change reality, to change that thing or class of things, can you come into contact with its phenomena; and only through personal participation in the practical struggle to change reality can you expose the essence of that thing or class of things and understand it. (On Practice)
Practice, knowledge, again practice, and again knowledge. (On Practice)
- The process by which perceptual knowledge rises to rational knowledge, and cognition develops toward theory
- The dialectical relationship between theoretical knowledge and practical experience
- The limitations and the ever-moving character of cognition and practice
"A particular process of practice in a particular historical period"