zkv

Simple key-value store for single-user applications.

Pros

  • Simple two-file structure (data file and index file)
  • Internal Zstandard compression via klauspost/compress/zstd
  • Thread-safe operations through sync.RWMutex

Cons

  • Index is stored in memory (map[key hash (28 bytes)]file offset (int64))
  • No transaction system
  • Index file is fully rewritten on every store commit
  • No way to recover disk space from deleted records
  • Write and Delete operations block Reads and each other

Usage

Create or open an existing store file:

db, err := zkv.Open("path to file")

Data operations:

// Write data
err = db.Set(key, value) // key and value can be of any type

// Read data
var value ValueType
err = db.Get(key, &value)

// Delete data
err = db.Delete(key)

Other methods:

// Flush data to disk
err = db.Flush()

// Backup data to another file
err = db.Backup("new/file/path")
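
Putting these calls together, a minimal end-to-end session could look like the sketch below. It assumes the methods shown above plus a Close method for flushing and releasing the store; the file path, key and value are illustrative only.

package main

import (
	"log"

	"github.com/nxshock/zkv"
)

func main() {
	// Open or create the store file (path is illustrative)
	db, err := zkv.Open("data.zkv")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close() // assumed Close method that flushes and releases the store

	// Write a value
	if err := db.Set("answer", 42); err != nil {
		log.Fatal(err)
	}

	// Read it back into a variable of the matching type
	var answer int
	if err := db.Get("answer", &answer); err != nil {
		log.Fatal(err)
	}
	log.Println("answer =", answer)

	// Delete the record
	if err := db.Delete("answer"); err != nil {
		log.Fatal(err)
	}
}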

Store options

type Options struct {
	// Maximum number of concurrent reads
	MaxParallelReads int

	// Compression level
	CompressionLevel zstd.EncoderLevel

	// Memory write buffer size in bytes
	MemoryBufferSize int

	// Disk write buffer size in bytes
	DiskBufferSize int
}
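
As a sketch, a store could be opened with custom options roughly like this; the constructor name OpenWithOptions and the concrete values are assumptions, and the zstd constants come from github.com/klauspost/compress/zstd:

// Requires importing "github.com/klauspost/compress/zstd" for the EncoderLevel constants.
db, err := zkv.OpenWithOptions("path to file", zkv.Options{
	MaxParallelReads: 4,                           // allow up to 4 concurrent readers
	CompressionLevel: zstd.SpeedBetterCompression, // trade write speed for a better ratio
	MemoryBufferSize: 4 * 1024 * 1024,             // 4 MiB in-memory write buffer
	DiskBufferSize:   1 * 1024 * 1024,             // 1 MiB disk write buffer
})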

File structure

Each record is an encoding/gob-encoded structure:

Field       Description              Size
Type        Record type              uint8
KeyHash     Key hash                 28 bytes
ValueBytes  Gob-encoded value bytes  variable
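
For illustration, the table above corresponds to a Go structure roughly like the following; field names follow the table, while the actual definition lives in record.go and may differ:

type Record struct {
	Type       uint8    // record type (e.g. set or delete)
	KeyHash    [28]byte // fixed-length hash of the key
	ValueBytes []byte   // gob-encoded value
}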

The data file is a log-structured list of records:

Field   Description                  Size
Length  Record body length in bytes  int64
Body    Gob-encoded record           variable
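
A simplified sketch of scanning such a log is shown below. It reuses the Record sketch above, uses the standard bytes, encoding/binary, encoding/gob and io packages, assumes the length prefix is written with encoding/binary in big-endian order, and ignores the Zstandard block compression, so it is not the store's actual read path:

// scanLog decodes length-prefixed gob records from r until EOF.
func scanLog(r io.Reader) ([]Record, error) {
	var records []Record
	for {
		var length int64
		err := binary.Read(r, binary.BigEndian, &length) // length prefix (endianness assumed)
		if err == io.EOF {
			return records, nil
		}
		if err != nil {
			return nil, err
		}

		body := make([]byte, length)
		if _, err := io.ReadFull(r, body); err != nil {
			return nil, err
		}

		var rec Record
		if err := gob.NewDecoder(bytes.NewReader(body)).Decode(&rec); err != nil {
			return nil, err
		}
		records = append(records, rec)
	}
}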

The index file is a simple gob-encoded map:

map[string]struct {
	BlockOffset  int64
	RecordOffset int64
}

where the map key is the data key hash and the value describes the record's position in the data file (block offset and record offset within the block).
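
As an illustration, such an index could be loaded with encoding/gob roughly like this, assuming the file contains nothing but the gob-encoded map; the offsets type is a stand-in for the anonymous struct above, and the standard encoding/gob and os packages are used:

// offsets mirrors the anonymous struct stored in the index file.
type offsets struct {
	BlockOffset  int64
	RecordOffset int64
}

// loadIndex reads the whole gob-encoded index map from a file.
func loadIndex(path string) (map[string]offsets, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	index := make(map[string]offsets)
	if err := gob.NewDecoder(f).Decode(&index); err != nil {
		return nil, err
	}
	return index, nil
}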

Resource consumption

Store requirements:

  • around 300 MB of RAM per 1 million keys
  • around 34 MB of disk space for the index file per 1 million keys

TODO

  • Add recovery of the previous store file state on write error
  • Add a method to rebuild the index