Showing posts from April, 2025

CST 334 - Week 8

I guess this journal is more about what I learned over the whole course rather than just this past week. Looking back, I’d say I mostly got the basics of Docker. I understand how to use it to run and manage containers, but I feel like there’s still a lot more to explore. I also got more comfortable with C, which helped me understand how things work at a lower level. Throughout the course, I was introduced to some operating system APIs, which gave me a better idea of how programs interact with the system. One of the topics I found challenging was CPU scheduling. I get the general concept, and I can follow examples, but when it comes to doing it on my own, I’m still not fully confident. Another tough topic was memory management. I kind of get it, but I definitely need more practice. Even though it was hard, it was interesting to learn how the system handles memory behind the scenes. Things felt a lot more relaxed once we got to concurrency. I think that’s because I had already learned ...

CST 334 - Week 7

  This week was about persistence, covering I/O devices, hard disk drives, and RAID systems. These topics were not new to me because I had already taken hardware and digital forensics courses, where the basics were covered. However, this course focuses more on development, using code and math, and it also covers files and directories for the same purpose. The interesting part of this week's topics was that the article my team chose for the group project was about read and write operations on storage devices for energy efficiency. The concepts from the course helped me better understand the article, especially since it did not explain some of the technical terms.

CST 334 - Week 6

This week focused on two main topics. Compared to previous weeks, the workload was lighter, which made it easier to focus and understand the material without feeling overwhelmed or frustrated. One of the key topics was semaphores, which are synchronization tools used to manage access to shared resources in multi-threaded environments. They help prevent race conditions, which occur when multiple threads try to modify data at the same time. As I understand it, semaphores are similar to locks, but they include a counter that keeps track of how many threads can access a resource. When the counter reaches zero, other threads must wait until a resource becomes available again. The module also introduced two well-known problems that can be solved using semaphores. The first is the producer/consumer problem, where the producer must wait if the buffer is full, and the consumer must wait if the buffer is empty. This ensures that the producer does not overwrite data and the co...

CST 334 - Week 5

  This week was about concurrency and threads, as well as locks, a topic closely tied to threads. I was already familiar with threads, so this topic wasn't too hard to understand, but I still have trouble focusing on and understanding the book. The book compares threads to processes, but they are not the same. One key difference is that threads share memory, which allows a program to perform multiple actions at the same time without waiting for one action to finish before starting another. The book explains that this makes programs more efficient. However, it also introduces potential problems, such as when multiple threads try to update the same resource at the same time. This can cause the program to behave unpredictably or produce incorrect results. To help avoid this kind of issue, locks are used. A lock ensures that only one thread at a time can access a shared resource. The thread holds the lock while it's running,...

CST 334 - Week 4

 This week, the main thing I learned was about paging, which is a different method from segmentation for virtualizing memory. In segmentation, partitions have different sizes, which can lead to wasted space between them (external fragmentation). Paging solves this problem by using fixed-size blocks called pages, so memory is divided evenly. Even though this is an advantage, paging can make memory access slower and requires extra memory for page tables. It also suffers from internal fragmentation, since a process rarely fills its last page exactly. However, one benefit is that it does not require contiguous memory allocation.