Posts

CST 334 - Week 8

I guess this journal is more about what I learned over the whole course rather than just this past week. Looking back, I’d say I mostly got the basics of Docker. I understand how to use it to run and manage containers, but I feel like there’s still a lot more to explore. I also got more comfortable with C, which helped me understand how things work at a lower level. Throughout the course, I was introduced to some operating system APIs, which gave me a better idea of how programs interact with the system. One of the topics I found challenging was CPU scheduling. I get the general concept, and I can follow examples, but when it comes to doing it on my own, I’m still not fully confident. Another tough topic was memory management. I kind of get it, but I definitely need more practice. Even though it was hard, it was interesting to learn how the system handles memory behind the scenes. Things felt a lot more relaxed once we got to concurrency. I think that’s because I had already learned ...

CST 334 - Week 7

This week was about storage persistence, including I/O devices, hard disk drives, and RAID systems. These topics were not new to me because I had already taken hardware and digital forensics courses, where the basics were covered. However, this course focuses more on development using code and math, and it also explains files and directories from that same angle. The interesting part of this week's topics was that the article my team chose for the group project was about read and write operations on storage devices for energy efficiency. In this case, the concepts from the course helped me better understand the article, especially since it did not explain some of the technical terms.

CST 334 - Week 6

This week focused on two main topics. Compared to previous weeks, the workload was lighter, which made it easier to focus and understand the material without feeling overwhelmed or frustrated. One of the key topics was semaphores, which are synchronization tools used to manage access to shared resources in multi-threaded environments. They help prevent race conditions, which occur when multiple threads try to modify data at the same time. As I understand it, semaphores are similar to locks, but they include a counter that keeps track of how many threads can access a resource. When the counter reaches zero, other threads must wait until a resource becomes available again. The module also introduced two well-known problems that can be solved using semaphores. The first is the producer/consumer problem, where the producer must wait if the buffer is full, and the consumer must wait if the buffer is empty. This ensures that the producer does not overwrite data and the co...

CST 334 - Week 5

This week was about concurrency and threads, as well as locks, a topic closely tied to threads. I was already familiar with threads, so this topic wasn't too hard to understand, but I still have trouble focusing on and understanding the book. The book compares threads to processes, but they are not the same. One key difference mentioned is that threads share memory, which allows a program to perform multiple actions at the same time, without waiting for one action to finish before starting another. The book explains that this makes programs more efficient. However, it also introduces potential problems, such as when multiple threads try to update the same resource at the same time. This can cause the program to behave unpredictably or produce incorrect results. To help avoid this kind of issue, lock implementations are used. A lock ensures that only one thread at a time can access a shared resource. The thread holds the lock while it’s running,...

CST 334 - Week 4

This week, the main thing I learned was about paging, which is a different method from segmentation for virtualizing memory. In segmentation, partitions have different sizes, which can lead to wasted space. Paging addresses this by using fixed-size blocks called pages, so memory is divided evenly. Even with this advantage, paging can make memory access slower and requires extra memory for page tables, and it still suffers from internal fragmentation. However, one benefit is that it does not require contiguous memory allocation.

CST 334 - Week 3

This week was about memory management, which, honestly, was hard to understand. I had some ideas about how it works from when I learned programming and also from the architecture course, but that was some time ago. Now, after reading about it again, it’s more confusing than helpful. However, what I understand about how memory works is that it’s used to store data, code, instructions, and more. Main memory is one type of memory that allows fast access, but it has a limited size and is volatile, so its contents do not survive once power is removed. The purpose of memory management is to allocate the required memory for each program to run and also to protect the memory of other processes while this happens. For programs, memory is stored in locations to be read and to hold variables and arguments. In some programming languages, memory allocation is done automatically, but in others like C, you can manually allocate memory and also free it when it's no longer necessary. According to the book, allocation...

CST 334 - Week 2

This week in the book, I learned about processes, which are essentially running programs. For the CPU to be used efficiently, it helps to have more than one process working concurrently, such as running a web browser and a video game at the same time. To enable this, the operating system uses virtualization to give the illusion of multiple CPUs. In the process API, UNIX systems provide calls like fork(), exec(), and wait() that allow process management. The fork() system call creates a new child process that is a near copy of the parent. The wait() system call makes the parent pause until its child finishes, which can be used to control the order in which things run. As for the exec() system call, I didn’t fully understand it from the book, but according to Google, it replaces the current process image with a new program. I’m still not sure how this works. Then comes my new nightmare: CPU scheduling, specifically the metric Response Time, which is defined as the time of first run minus the arrival time (T_response = T_firstrun − T_arrival). The challenge for me was understanding where to find the T first run in order to calculate the T response. I’m still having ...