libStatGen Software
A pair of related data structures in the operating system, along with a few simple algorithms, explains why your processes are waiting forever.
#include <MemoryMap.h>
Public Member Functions

void debug_print ()
void constructor_clear ()
void destructor_clear ()
virtual bool allocate ()
virtual bool open (const char *file, int flags=O_RDONLY)
    open a previously created mapped vector
virtual bool create (const char *file, size_t size)
    create the memory mapped file on disk
virtual bool create (size_t size)
    store in allocated memory (malloc), not mmap
bool close ()
void test ()
size_t length ()
char operator[] (unsigned int index)
int prefetch ()
void useMemoryMap (bool flag=true)

Public Attributes

void * data
A pair of related data structures in the operating system, along with a few simple algorithms, explains why your processes are waiting forever.
The symptom is that your processes are getting little or no CPU time, as shown by the command 'top'. The machine will appear to have CPU time available (look at the Cpu(s): line; if it shows less than 100%, you have available CPU). The real key, however, is the 'top' column labeled 'S': it shows the status of each process, and is crucial to understanding what is going on.
In your case, the 'S' column for your karma jobs is 'D', which means the process is waiting for data: it is doing something that requires the filesystem to return data to it. Usually this is caused by a C call like read() or write(), but it also happens in large processes whose memory was copied to disk and re-used for other purposes (this is called paging).
So, a bit of background on the operating system: there is a CPU scheduler that takes the list of waiting processes and picks one to run. If a job is waiting for the disk, there is no point in picking it, since it is blocked until the disk returns data. The scheduler marks the process 'D' and moves on to the next process.
For this example, there are two data structures that we care about. The first is a linear list of disk buffers that are stored in RAM and controlled by the operating system, usually called the disk buffer pool. When a program asks for data from the disk, this list can be scanned quickly to see whether the data is already in RAM; if so, no disk operation needs to take place.
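To make the idea concrete, here is a minimal C++ sketch of a disk buffer pool lookup. The structure and names are purely illustrative, not the kernel's actual implementation:

#include <stdint.h>
#include <stddef.h>

// Illustrative only: a cached disk page keyed by block number.
struct DiskBuffer
{
    uint64_t blockNumber;    // which disk block this page caches
    char     page[4096];     // the cached data itself
    DiskBuffer *next;        // newest buffers live near the head of the list
};

// Scan the pool before issuing real disk I/O.
DiskBuffer *lookup(DiskBuffer *pool, uint64_t blockNumber)
{
    for (DiskBuffer *b = pool; b != NULL; b = b->next)
        if (b->blockNumber == blockNumber)
            return b;        // cache hit: no disk operation needed
    return NULL;             // cache miss: the kernel must read the disk
}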
In the case of the normal Unix read() and write() calls, once the operating system has found the page, it copies the data into a buffer supplied by the process that requested it (for read(); write() goes the opposite direction). This copy operation is slow and inefficient, but gets the job done.
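For reference, the copying path looks like this from user space, using standard POSIX calls (the file name is a placeholder):

#include <fcntl.h>
#include <unistd.h>

int main()
{
    char userBuffer[4096];
    int fd = ::open("index.dat", O_RDONLY);   // placeholder file name
    if (fd == -1) return 1;
    // The kernel locates the page in the disk buffer pool (reading the
    // disk if necessary), then copies the bytes into userBuffer.
    ssize_t n = ::read(fd, userBuffer, sizeof(userBuffer));
    ::close(fd);
    return (n == (ssize_t) sizeof(userBuffer)) ? 0 : 1;
}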
So overall, you gain some efficiency in a large memory system by having this disk buffer pool data structure, since you aren't re-reading the disk over and over to get the same data that you already have in RAM. However, it is less efficient than it might be because of the extra buffer copying.
Now we come to memory mapped files, and karma. The underlying system call of interest to us is mmap(), used in MemoryMap.cpp. What it does and how it works are important to understanding its benefits, and frankly, most people don't look into it because it seems complex.
Two things are important to know. First, there is a data structure called the page table, which is mostly contained in the CPU hardware itself; all memory accesses by normal user processes like karma go through this hardware page table. Second, it is very fast for the operating system to put together a page table that 'connects' a range of memory locations in your user program's address space to the disk buffer pool pages.
The combination of those two facts means that you can implement a 'zero copy' approach to reading data: the data in the disk buffer pool is directly readable by the program, without the operating system ever having to copy it the way it does for read() or write().
So the benefit of mmap() is that when the underlying disk pages are already in the disk buffer pool, the hardware page table entries get built, the call returns, and the data is available at full processor speed, with no intervening copy and no waiting for the disk. It is as near to instantaneous as you can possibly get, whether the file is 100 bytes or 100 gigabytes.
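A minimal sketch of that zero-copy path, using the same system calls MemoryMap.cpp uses (the file name is a placeholder):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    int fd = ::open("index.dat", O_RDONLY);   // placeholder file name
    if (fd == -1) return 1;
    struct stat buf;
    if (fstat(fd, &buf) != 0) return 1;
    // Build page table entries pointing at the disk buffer pool pages;
    // no data is copied.
    char *data = (char *) ::mmap(NULL, buf.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (data == MAP_FAILED) return 1;
    char first = data[0];   // an ordinary memory access, at full processor speed
    (void) first;
    ::munmap(data, buf.st_size);
    ::close(fd);
    return 0;
}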
So, the last part of the puzzle is why your program winds up in 'D' (data wait), and what to do about it.
The disk buffer pool is a linear list of blocks ordered by time of last access. A process runs every once in a while to take the oldest of those pages and free them, during which it also has to update the hardware page tables of any processes referencing them.
So on wonderland, most file access (wget, copy, md5sum, anything else) is constantly putting fresh pages at the front of the list, and karma index files, having been opened a while ago, are prime candidates for being paged out. The reason they get paged out, as far as I know, is that in any given second of execution nowhere near the entire index is being accessed, so at some point at least one page gets flushed from RAM. Once that happens, a cascade follows: the longer the process waits, the older its remaining pages get, the more of them get reclaimed, and the slower it runs, until karma is at a standstill, waiting for pages to be brought back into RAM.
In an ideal world, karma would rapidly recover, and sometimes it does. The problem is that your karma job accesses data all over that index, so to the underlying filesystem it looks like purely random I/O. There is roughly a 10 to 1 performance difference between accessing the disk sequentially and accessing it randomly.
So to make karma work better, the first thing I do when starting karma is force it to read all of the disk pages in order. This pulls the entire index into memory with sequential reads, which is the best case possible. There are still problems: if three karma jobs start at once, the disk I/O is no longer as purely sequential as we would like, and if the filesystem is busy serving other programs, then even though karma thinks it is forcing sequential I/O, the net result looks more random. When that happens the system is starting to break down (thrashing), and karma will stall, look very very slow, or crash.
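The prefetch() member listed above serves this purpose; a sketch of the idea, assuming a 4096-byte page size:

#include <stddef.h>

// Touch one byte per page, front to back, so the kernel fetches the
// file with large sequential reads instead of random page faults later.
int touchSequentially(const char *data, size_t length)
{
    const size_t pageSize = 4096;   // assumed; real code would ask the OS
    int sum = 0;
    for (size_t offset = 0; offset < length; offset += pageSize)
        sum += data[offset];        // faults each page in, in order
    return sum;                     // returned so the compiler keeps the loop
}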
The upshot of all of this is that when a single reference is shared, it is more likely that all the pages are already in the disk buffer pool to begin with, reducing startup time to nearly zero. It is also the ideal situation for sharing the same reference among, say, 24 copies of karma on wonderland: the only cost is the hardware page table that gets set up to point at the shared disk buffers.
As I mentioned a paragraph back, the pages can still get swapped out, even with dozens of karma jobs running. A workaround I created is a program in utilities called mapfile: it simply re-reads the data in sequential order, over and over, to keep all of the pages at the head of the disk buffer pool and therefore less likely to be swapped out.
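A sketch of what such a program does; the sweep interval is arbitrary here:

#include <stddef.h>
#include <unistd.h>

// Periodically re-touch every page of a mapping so its buffers stay
// near the head of the disk buffer pool.
void keepResident(const volatile char *data, size_t length)
{
    for (;;)
    {
        for (size_t offset = 0; offset < length; offset += 4096)
            (void) data[offset];    // refresh the page's access time
        sleep(60);                  // arbitrary pause between sweeps
    }
}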
The benefit of such a program (mapfile) is greater on wonderland, where a lot of processes are competing for memory and disk buffers.
Definition at line 155 of file MemoryMap.h.
bool MemoryMap::create (const char * file, size_t size) [virtual]
create the memory mapped file on disk
A file will be created on disk with the header filled in. The caller must then populate elements using (*this).set(index, value).
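A hedged usage sketch; the file name and size are illustrative:

MemoryMap m;
if (m.create("vector.dat", 4096))   // illustrative name; true means failure
{
    // report the error and bail out
}
// then populate the contents with set(index, value), as described above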
Definition at line 243 of file MemoryMap.cpp.
References open().
Referenced by create().
{
    if (file==NULL)
    {
        data = calloc(size, 1);
        return(data==NULL);
    }
    const char * message = "MemoryMap::create - problem creating file %s";
#ifdef __WIN32__
    file_handle = CreateFile(file,
                             GENERIC_READ | GENERIC_WRITE,
                             FILE_SHARE_READ | FILE_SHARE_WRITE,
                             NULL,
                             CREATE_ALWAYS,
                             FILE_ATTRIBUTE_NORMAL,
                             NULL);
    if (file_handle == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    SetFilePointer(file_handle, size - 1, NULL, FILE_BEGIN);
    char dummy = 0;
    DWORD check = 0;
    WriteFile(file_handle, &dummy, 1, &check, NULL);
    if (check != 1)    // fewer than 1 byte written means the write failed
    {
        CloseHandle(file_handle);
        DeleteFile(file);
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    CloseHandle(file_handle);
    open(file, O_RDWR);
#else
    fd = ::open(file, O_RDWR|O_CREAT|O_TRUNC, 0666);
    if (fd == -1)
    {
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    lseek(fd, (off_t) size - 1, SEEK_SET);
    char dummy = 0;
    if (write(fd, &dummy, 1) != 1)
    {
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    data = ::mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, offset);
    if (data == MAP_FAILED)
    {
        ::close(fd);
        unlink(file);
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    mapped_length = total_length = size;
#endif
    return false;
}
bool MemoryMap::create (size_t size) [virtual]
store in allocated memory (malloc), not mmap

This is for code that needs to handle more flexibly the case when an mmap() file _might_ be available: if it is not, we want to load the data anyway as a convenience to the user. GenomeSequence::populateDBSNP does exactly this.
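A sketch of that fallback pattern; the name and size below are illustrative:

size_t expectedSize = 1024 * 1024;   // illustrative size
MemoryMap m;
if (m.open("dbsnp.map"))             // illustrative name; true means the file was not usable
{
    m.create(expectedSize);          // no precomputed file: fall back to malloc()
    // ... compute and store the contents here ...
}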
Definition at line 319 of file MemoryMap.cpp.
References create().
{
    return create(NULL, size);
}
bool MemoryMap::open (const char * file, int flags = O_RDONLY) [virtual]
open a previously created mapped vector
The useMemoryMapFlag member determines whether open() uses mmap() or malloc()/read() to populate the memory.
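For example, a caller can force the malloc()/read() path before opening (the file name is illustrative):

MemoryMap m;
m.useMemoryMap(false);               // populate with allocate() and read() instead of mmap()
if (m.open("index.dat", O_RDONLY))   // illustrative name; true means failure
{
    // report the error
}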
Reimplemented in MemoryMapArray< elementT, indexT, cookieVal, versionVal, accessorFunc, setterFunc, elementCount2BytesFunc, arrayHeaderClass >, and GenomeSequence.
Definition at line 156 of file MemoryMap.cpp.
Referenced by create().
{
    const char * message = "MemoryMap::open - problem opening file %s";
#if defined(_WIN32)
    file_handle = CreateFile(file,
                             (flags==O_RDONLY) ? GENERIC_READ : (GENERIC_READ | GENERIC_WRITE),
                             FILE_SHARE_READ | FILE_SHARE_WRITE, // subsequent opens may either read or write
                             NULL,
                             OPEN_EXISTING,
                             FILE_ATTRIBUTE_NORMAL,
                             NULL);
    if (file_handle == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    LARGE_INTEGER file_size = {0};
    ::GetFileSizeEx(file_handle, &file_size);
    mapped_length = total_length = file_size.QuadPart;
#else
    struct stat buf;
    fd = ::open(file, flags);
    if ((fd == -1) || (fstat(fd, &buf) != 0))
    {
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    mapped_length = total_length = buf.st_size;
#endif
    if (!useMemoryMapFlag)
    {
        return allocate();
    }
#if defined(_WIN32)
    assert(offset == 0);
    map_handle = CreateFileMapping(file_handle,
                                   NULL,
                                   (flags==O_RDONLY) ? PAGE_READONLY : PAGE_READWRITE,
                                   file_size.HighPart, // upper 32 bits of map size
                                   file_size.LowPart,  // lower 32 bits of map size
                                   NULL);
    if (map_handle == NULL)
    {
        ::CloseHandle(file_handle);
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
    data = MapViewOfFile(map_handle,
                         (flags == O_RDONLY) ? FILE_MAP_READ : FILE_MAP_ALL_ACCESS,
                         0, 0,
                         mapped_length);
    if (data == NULL)
    {
        CloseHandle(map_handle);
        CloseHandle(file_handle);
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
#else
    data = ::mmap(NULL,
                  mapped_length,
                  (flags == O_RDONLY) ? PROT_READ : PROT_READ | PROT_WRITE,
                  MAP_SHARED,
                  fd,
                  offset);
    if (data == MAP_FAILED)
    {
        ::close(fd);
        fprintf(stderr, message, file);
        constructor_clear();
        return true;
    }
#endif
    return false;
}