Infinite



【TP】15 Address Translation

Posted on 2017-09-17 | Post modified 2019-11-16 | In OS

A general mechanism: limited direct execution (LDE).
The idea is simple: for the most part, let the program run directly on the hardware.

Let's look at one example:

128: movl 0x0(%ebx), %eax    ; load 0+ebx into eax
132: addl $0x03, %eax        ; add 3 to eax register
135: movl %eax, 0x0(%ebx)    ; store eax back to mem

When these instructions run, from the perspective of the process, the following memory accesses take place (a translation sketch follows the list).

  • Fetch instruction at address 128
  • Execute this instruction (load from address 15 KB)
  • Fetch instruction at address 132
  • Execute this instruction (no memory reference)
  • Fetch the instruction at address 135
  • Execute this instruction (store to address 15 KB)
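Each of these addresses is virtual; under limited direct execution the hardware translates every one of them to a physical address on the fly. Below is a minimal sketch of the simplest such scheme, dynamic relocation (base and bounds); the 32 KB base and 16 KB bounds are assumed illustrative values, not figures taken from the post.

// A minimal sketch of base-and-bounds translation, assuming a hypothetical
// 16 KB address space loaded at physical address 32 KB. On every reference
// the hardware adds the base register and checks the bounds register.
#include <cstdint>
#include <cstdio>
#include <cstdlib>

constexpr uint32_t kBase   = 32 * 1024;  // assumed physical load address
constexpr uint32_t kBounds = 16 * 1024;  // assumed size of the address space

// Translate a virtual address the way the MMU would.
uint32_t Translate(uint32_t vaddr) {
  if (vaddr >= kBounds) {                // out of bounds: raise a fault
    std::fprintf(stderr, "segmentation fault at %u\n", vaddr);
    std::exit(1);
  }
  return vaddr + kBase;                  // physical = virtual + base
}

int main() {
  // the three instruction fetches and two data accesses from the trace above
  uint32_t refs[] = {128, 15 * 1024, 132, 135, 15 * 1024};
  for (uint32_t v : refs)
    std::printf("virtual %5u -> physical %5u\n", v, Translate(v));
  return 0;
}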
    Read more »

【TP】13 The Abstraction: Address Spaces

Posted on 2017-09-17 | Post modified 2019-11-16 | In OS

Early Systems

Early machines didn’t provide much of an abstraction to users.

The OS was a set of routines (a library, really) that sat in memory (starting at physical address 0 in this example), and there would be one running program (a process) that currently sat in physical memory (starting at physical address 64k in this example) and used the rest of memory.

Multiprogramming and Time Sharing

After a time, because machines were expensive, people began to share machines more effectively. Thus the era of multiprogramming was born [DV66], in which multiple processes were ready to run at a given time, and the OS would switch between them, for example when one decided to perform an I/O. Doing so increased the effective utilization of the CPU. Such increases in efficiency were particularly important in those days where each machine cost hundreds of thousands or even millions of dollars.

Soon enough, however, people began demanding more of machines, and the era of time sharing was born.

Read more »

【TP】43 Log-structured File System

Posted on 2017-09-11 | Post modified 2019-11-16 | In OS

Background:

  • System memories are growing
  • There is a large gap between random I/O performance and sequential I/O performance
  • Existing file systems perform poorly on many common workloads
  • File systems are not RAID-aware

LFS

  1. First, buffer all updates (including metadata) in an in-memory segment.
  2. When the segment is full, write it to disk in one long, sequential transfer to an unused part of the disk.

LFS never overwrites existing data, but rather always writes segments to free locations. Because segments are large, the disk is used efficiently, and the performance of the file system approaches its zenith.
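A minimal sketch of this buffering scheme, assuming a hypothetical 64 KB segment size and a WriteSequential() stub that stands in for one long, sequential transfer to a free region of the disk:

// LFS-style segment buffering: updates accumulate in memory and are written
// out in one large sequential transfer when the segment fills.
#include <cstddef>
#include <cstdio>
#include <cstring>

constexpr size_t kSegSize = 64 * 1024;   // assumed segment size

char   segment[kSegSize];                // in-memory segment buffer
size_t seg_used = 0;                     // bytes buffered so far

void WriteSequential(const char* buf, size_t len) {
  // stub: a real LFS would issue one long write to the next free extent
  std::printf("writing %zu-byte segment to a free region of the disk\n", len);
}

// Buffer an update (data block or metadata); flush when the segment fills.
void LfsAppend(const void* update, size_t len) {
  if (seg_used + len > kSegSize) {       // segment full: write it out
    WriteSequential(segment, seg_used);
    seg_used = 0;                        // start a fresh segment
  }
  std::memcpy(segment + seg_used, update, len);
  seg_used += len;
}

int main() {
  char block[4096] = {};                 // a dummy 4 KB data block
  for (int i = 0; i < 20; ++i)           // 80 KB of updates: one flush occurs
    LfsAppend(block, sizeof(block));
  return 0;
}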

Read more »

【TP】42 FSCK and Journaling

Posted on 2017-09-09 | Post modified 2019-11-16 | In OS

Old approach: fsck (the file system checker).
New approach: journaling (also known as write-ahead logging).
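As a rough sketch of the write-ahead-logging idea (not ext3's actual code), with hypothetical JournalWrite/JournalCommit/DiskWrite helpers standing in for ordered block writes: the dirty blocks go to the journal first, and only after the commit record is durable are the in-place (checkpoint) writes issued.

#include <cstdio>
#include <vector>

struct Update { int block_no; const char* what; };

void JournalWrite(const Update& u) {     // append a block to the journal
  std::printf("journal: block %d (%s)\n", u.block_no, u.what);
}
void JournalCommit() {                   // write and flush the commit block
  std::printf("journal: commit\n");
}
void DiskWrite(const Update& u) {        // write the block to its home location
  std::printf("disk:    block %d (%s)\n", u.block_no, u.what);
}

void JournaledUpdate(const std::vector<Update>& updates) {
  for (const Update& u : updates) JournalWrite(u);  // 1. journal the pending blocks
  JournalCommit();                                  // 2. commit: transaction is durable
  for (const Update& u : updates) DiskWrite(u);     // 3. checkpoint the blocks in place
  // 4. later, the journal space can be reclaimed (not shown)
}

int main() {
  // e.g. an update that dirties the inode bitmap, an inode, and a data block
  JournaledUpdate({{0, "inode bitmap"}, {2, "inode"}, {4, "data block"}});
  return 0;
}

If a crash happens before the commit record reaches disk, the transaction is simply discarded on recovery; if it happens after, the journaled blocks are replayed, so the on-disk structures never end up half-updated.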

A Detailed Example


If you look at the structures in the picture, you can see that a single inode is allocated (inode number 2) and marked in the inode bitmap, along with a single allocated data block (data block 4), also marked in the data bitmap. The inode is denoted I[v1].

Let's look inside this simple inode:

Read more »

【TP】41 Fast File System (FFS)

Posted on 2017-09-09 | Post modified 2019-11-16 | In OS

Old Unix file system: simple, and the block size is small (512 B).

Problems: performance was terrible, because the old file system treated the disk like random-access memory, delivering only 2% of overall disk bandwidth:

  • data spread all over the place, with expensive positioning costs (a data block could be very far away from its inode)
  • the file system kept getting fragmented

E gets spread across the disk, and as a result, when accessing E, you don’t get peak (sequential) performance from the disk. Rather, you first read E1 and E2, then seek, then read E3 and E4. This fragmentation problem happened all the time in the old UNIX file system, and it hurt performance.

The small block size also made transferring data from the disk inefficient.
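A back-of-the-envelope illustration of that bandwidth claim, using assumed numbers (10 ms positioning per block, 1 MB/s peak transfer) rather than measurements from the FFS paper: every 512 B block pays a full positioning delay before a very short transfer.

#include <cstdio>

int main() {
  const double position_ms  = 10.0;      // assumed seek + rotation per block
  const double peak_bytes_s = 1e6;       // assumed peak transfer rate (1 MB/s)
  const double block_bytes  = 512.0;     // old Unix file system block size

  double transfer_ms = block_bytes / peak_bytes_s * 1e3;            // ~0.5 ms
  double effective   = block_bytes / ((position_ms + transfer_ms) / 1e3);

  std::printf("effective bandwidth: %.1f KB/s (%.1f%% of peak)\n",
              effective / 1e3, 100.0 * effective / peak_bytes_s);
  return 0;
}

With these assumed numbers the disk delivers only a few percent of its peak bandwidth, the same order of magnitude as the 2% figure above.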

Read more »

LevelDb Explained

Posted on 2017-08-23 | Post modified 2019-11-16 | In Technology

You may not have heard of LevelDb, but if you are an IT engineer and do not know the two master-level engineers below, your boss will probably not be impressed: Jeff Dean and Sanjay Ghemawat. They are heavyweight engineers at Google and two of its very few Google Fellows.

About Jeff Dean: http://research.google.com/people/jeff/index.html, the principal designer and implementer of Google's large-scale distributed platforms Bigtable and MapReduce.

About Sanjay Ghemawat: http://research.google.com/people/sanjay/index.html, a principal design and implementation engineer of Google's large-scale distributed platforms GFS, Bigtable, and MapReduce.

LevelDb is an open-source project started by these two master-level engineers. In short, LevelDb is a C++ library for persistent storage of key-value data at the scale of billions of entries. As introduced above, the two of them designed and implemented Bigtable; if you know Bigtable, you know that this influential distributed storage system has two core components: the Master Server and the Tablet Server. The Master Server handles data-management and distributed-scheduling work, while the actual distributed data storage and read/write operations are performed by the Tablet Server. LevelDb can be understood as a simplified version of a Tablet Server.
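For reference, the basic LevelDb C++ API (open a database, put a key-value pair, read it back) looks like this; the path /tmp/testdb is only an example.

#include <cassert>
#include <string>
#include "leveldb/db.h"

int main() {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;                 // create the DB if it is absent
  leveldb::Status status = leveldb::DB::Open(options, "/tmp/testdb", &db);
  assert(status.ok());

  status = db->Put(leveldb::WriteOptions(), "key1", "value1");   // write
  assert(status.ok());

  std::string value;
  status = db->Get(leveldb::ReadOptions(), "key1", &value);      // read
  assert(status.ok() && value == "value1");

  delete db;                                        // close the database
  return 0;
}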

Read more »