Swarm is a distributed storage platform and content distribution service, a native base layer service of the ethereum web 3 stack. From an economic point of view, it allows participants to efficiently pool their storage and bandwidth resources in order to provide the aforementioned services to all participants. The objective is to offer a peer-to-peer storage and serving solution that is DDOS-resistant, zero-downtime, fault-tolerant and censorship-resistant as well as self-sustaining due to a built-in incentive system which uses peer-to-peer accounting and allows trading resources for payment.
Swarm is designed to deeply integrate with the devp2p multiprotocol network layer of Ethereum as well as with the Ethereum blockchain for domain name resolution, service payments and content availability insurance (the latter is to be implemented in POC 0). The swarm client is part of the Ethereum stack; the reference implementation is written in golang and found under the go-ethereum repository.
Currently at POC (proof of concept) version 0. Swarm defines the bzz subprotocol running on the ethereum devp2p network. The bzz subprotocol is in flux; the specification of the wire protocol is considered stable only as of POC 0. The swarm of Swarm is the collection of nodes of the devp2p network, each of which runs the bzz protocol on the same network id. Swarm nodes are also connected to an ethereum blockchain.
Nodes running the same network id are supposed to connect to the same blockchain. Such a swarm network is identified by its network id, which is an arbitrary integer.
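As a rough illustration of what running a subprotocol on devp2p looks like, the sketch below declares a bzz-like protocol using go-ethereum's p2p.Protocol type. The version, message count and handler body are assumptions for illustration only, not the actual bzz wire protocol.

```go
package main

import (
	"github.com/ethereum/go-ethereum/p2p"
)

// bzzProtocol declares a devp2p subprotocol. Name, Version and Length are
// announced in the devp2p handshake; Run is invoked once per connected peer.
// The numbers below are illustrative, not the real bzz specification.
func bzzProtocol() p2p.Protocol {
	return p2p.Protocol{
		Name:    "bzz",
		Version: 0, // assumed version, for illustration
		Length:  8, // assumed number of message codes
		Run: func(peer *p2p.Peer, rw p2p.MsgReadWriter) error {
			// a real implementation would first exchange a handshake carrying
			// the swarm network id, then dispatch on msg.Code to serve
			// store/retrieve requests
			for {
				msg, err := rw.ReadMsg()
				if err != nil {
					return err
				}
				msg.Discard() // placeholder: drop the payload
			}
		},
	}
}

func main() {
	_ = bzzProtocol() // in practice the protocol is handed to the p2p server
}
```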
Swarm allows for upload and disappear: any node can upload content to the swarm and then go offline. Uploaded content is not guaranteed to persist until storage insurance is implemented (expected in POC 0). All participating nodes should be considered as providing a voluntary service with no formal obligation whatsoever, and should be expected to delete content at their own discretion.
Therefore, users should under no circumstances regard swarm as safe storage until the incentive system is functional. Upload of sensitive and private data is highly discouraged as there is no way to undo an upload.
In other words, users should refrain from uploading unencrypted sensitive data. In this guide, content is understood very broadly in a technical sense, denoting any blob of data. Swarm defines a specific identifier for a piece of content; this identifier serves as the retrieval address for the content.
Identifiers need to be collision free (two different blobs of data must never map to the same identifier), deterministically generated and uniformly distributed. The choice of identifier in swarm is the hierarchical swarm hash described in Swarm hash. The properties above let us view the identifiers as addresses at which content is expected to be found.
Since hashes can be assumed to be collision free, they are bound to one specific version of a piece of content. Hash addressing is therefore immutable in the strong sense that you cannot even express mutable content. Users, however, usually use some form of discovery and/or semantic access to data, which is implemented by the ethereum name service (ENS).
The ENS enables content retrieval based on mnemonic or branded names, much like the DNS of the world wide web, but without servers. Swarm nodes participating in the network also have their own base address (also called bzzkey), which is derived as the Keccak 256-bit SHA3 hash of an ethereum address, the so-called swarm base account of the node.
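The base address derivation can be illustrated with a few lines of Go; the account below is a made-up example, not a real address.

```go
package main

import (
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// Illustrative derivation of a node's base address (bzzkey): the Keccak-256
// hash of the node's swarm base account (an ethereum address).
func main() {
	// a made-up 20-byte ethereum address, for illustration only
	addr, err := hex.DecodeString("d1ade25ccd3d550a7eb532ac759cde7f9b32ab12")
	if err != nil {
		panic(err)
	}

	h := sha3.NewLegacyKeccak256()
	h.Write(addr)
	bzzkey := h.Sum(nil)

	// the 32-byte result lives in the same address space as chunk hashes
	fmt.Printf("bzzkey: %x\n", bzzkey)
}
```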
These node addresses define a location in the same address space as the data. When content is uploaded to swarm it is chopped up into pieces called chunks. Each chunk is accessed at the address defined by its swarm hash. The hashes of data chunks are themselves packaged into a chunk which in turn has its own hash. In this way the content gets mapped into a chunk tree. This hierarchical swarm hash construct allows for merkle proofs for chunks within a piece of content, thus providing swarm with integrity protected random access into large files, allowing, for instance, safe skipping within a streaming video.
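To make the chunk tree concrete, here is a deliberately simplified sketch of hierarchical hashing in Go. The real swarm chunker differs in important details (it encodes data lengths, uses a fixed branching factor and streams its input), so the constants and layout below are illustrative only.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// Simplified chunk-tree hashing: split content into fixed-size chunks, hash
// each chunk, pack the hashes into parent chunks and hash those, repeating
// until a single root hash remains.
const (
	chunkSize = 4096
	hashSize  = 32
)

func keccak(data []byte) []byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(data)
	return h.Sum(nil)
}

func rootHash(data []byte) []byte {
	// leaf level: one hash per content chunk
	var level [][]byte
	for i := 0; i < len(data); i += chunkSize {
		end := i + chunkSize
		if end > len(data) {
			end = len(data)
		}
		level = append(level, keccak(data[i:end]))
	}
	if len(level) == 0 {
		level = append(level, keccak(nil)) // empty content
	}
	// intermediate levels: pack up to chunkSize/hashSize child hashes per parent
	branches := chunkSize / hashSize
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += branches {
			end := i + branches
			if end > len(level) {
				end = len(level)
			}
			var parent []byte
			for _, child := range level[i:end] {
				parent = append(parent, child...)
			}
			next = append(next, keccak(parent))
		}
		level = next
	}
	return level[0]
}

func main() {
	content := make([]byte, 3*chunkSize+100)
	fmt.Printf("root address: %x\n", rootHash(content))
}
```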
The current version of swarm implements a strictly content addressed distributed hash table (DHT): the chunk of content with a particular address is expected to be found at the node(s) whose own address is closest to that chunk address. This assumption is guaranteed with a special network topology called kademlia, which offers very low constant time for lookups (logarithmic in the network size). Note that although content is relayed and stored as part of the protocol, there is no guarantee whatsoever that it will be preserved, and once data is uploaded there is no way to revoke it. Nodes cache content that they pass on at retrieval, resulting in an auto-scaling elastic cloud: popular content is replicated throughout the network, decreasing its retrieval latency. Caching also results in maximum resource utilisation, in as much as nodes will fill their dedicated storage space with data passing through them.
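The "closest node" logic behind the kademlia lookup can be sketched with XOR-based proximity. The 4-byte addresses below are shortened for readability; real swarm addresses are 32 bytes.

```go
package main

import (
	"fmt"
	"math/bits"
)

// proximity returns the number of leading bits two addresses share: the
// longer the common prefix, the "closer" the addresses are in kademlia terms.
func proximity(a, b [4]byte) int {
	for i := 0; i < len(a); i++ {
		if x := a[i] ^ b[i]; x != 0 {
			return i*8 + bits.LeadingZeros8(x)
		}
	}
	return len(a) * 8
}

// closestNode picks the node whose base address is nearest to the chunk
// address, i.e. where the chunk is expected to be stored and found.
func closestNode(chunk [4]byte, nodes [][4]byte) [4]byte {
	best := nodes[0]
	for _, n := range nodes[1:] {
		if proximity(chunk, n) > proximity(chunk, best) {
			best = n
		}
	}
	return best
}

func main() {
	chunk := [4]byte{0xd1, 0x00, 0x00, 0x00}
	nodes := [][4]byte{
		{0xd3, 0x11, 0x00, 0x00},
		{0xd1, 0x0f, 0x00, 0x00},
		{0x20, 0x00, 0x00, 0x00},
	}
	fmt.Printf("chunk %x is expected at node %x\n", chunk, closestNode(chunk, nodes))
}
```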
If capacity is reached, the least accessed chunks are purged by a garbage collection process. As a consequence, unpopular content will end up getting deleted. Storage insurance (to be implemented in POC 0) is meant to protect important content from such deletion. Swarm content access is centred around the notion of a manifest. A manifest file describes a document collection, e.g. a filesystem directory. Manifests specify paths and corresponding content hashes, allowing for url based content retrieval.
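For illustration, a manifest can be thought of as a small json routing table mapping paths to content hashes. The structs below are a sketch of that idea; the exact field set of swarm's manifest format may differ, and the hashes shown are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ManifestEntry maps a url path to the swarm hash of the content served
// at that path. Field names here are illustrative.
type ManifestEntry struct {
	Path        string `json:"path"`
	Hash        string `json:"hash"`
	ContentType string `json:"contentType,omitempty"`
}

// Manifest is a collection of entries, acting as a routing table for a
// directory or a virtually hosted site.
type Manifest struct {
	Entries []ManifestEntry `json:"entries"`
}

func main() {
	site := Manifest{Entries: []ManifestEntry{
		// hashes below are placeholders, not real swarm hashes
		{Path: "index.html", Hash: "aaaa...", ContentType: "text/html"},
		{Path: "css/style.css", Hash: "bbbb...", ContentType: "text/css"},
	}}
	out, _ := json.MarshalIndent(site, "", "  ")
	fmt.Println(string(out))
}
```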
Manifests can therefore define a routing table for static assets, including dynamic content using, for instance, static javascript. This offers the functionality of virtual hosting, storing entire directories or web 3 sites, similar to www but without servers. You can read more about these components in Architecture. The current status of swarm is a proof of concept, a vanilla prototype tested on a toy network. This version is POC 0.
- Swarm discussions: also on the Ethereum subreddit
- Issues are tracked on github and github only; swarm related issues and PRs are labeled with swarm
- Swarm roadmap and tentative plan for features and the POC series are found on the wiki
- Source code is at https:
- Example dapps are at https:
- This document source: https:
Note: Swarm POC 0 is an experimental prototype. Users should refrain from uploading unencrypted sensitive data; in other words, no valuable personal content and no illegal, controversial or unethical content. Use with extreme care. Pull requests should by default commit on the master branch (edge). You can also find the first 2 ethersphere orange papers there.