
Why I’m bullish on Zilliqa (long read)

Edit: TL;DR added in the comments
 
Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and from that moment on I've had a keen interest in smart contract platforms. I'm passionate about Ethereum but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has, in my opinion, found an elegant balance between being secure, decentralized and scalable.
 
Below I post my analysis of why, out of all the coins I went through, I'm most bullish on Zilliqa (yes, I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano, etc.). Note that this is not investment advice and, although it's a thorough analysis, there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on ‘silica’ (silicon dioxide) and stands for “silicon for the high-throughput consensus computer.”
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
 
Technology and some more:
 
Introduction
 
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it. They have also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
 
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, with 2,400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left to be issued. The maximum supply is 21 billion, with annual inflation currently at 7.13%, which will only decrease over time.
 
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing, but that decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain so that the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms and Zilliqa uses network-, transaction- and computational sharding. Network sharding opens up the possibility of using transaction- and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
 
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it’s good to keep in mind that making a blockchain decentralised, secure and scalable at the same time is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved first before blockchains can get to the point where they can create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. Because after all, these premises need to be true, otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
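To make the network-sharding idea concrete, here is a minimal Python sketch of splitting a pool of mining nodes into equally sized shards. It is only an illustration of the concept; in Zilliqa the DS committee performs the actual assignment based on the nodes' PoW submissions, not on a simple seeded shuffle like this.

```python
import random

def assign_to_shards(node_ids, num_shards, seed):
    """Randomly split a pool of mining nodes into equally sized shards."""
    rng = random.Random(seed)            # shared randomness everyone agrees on
    shuffled = list(node_ids)
    rng.shuffle(shuffled)
    shard_size = len(shuffled) // num_shards
    return [shuffled[i * shard_size:(i + 1) * shard_size]
            for i in range(num_shards)]

# Example: 2400 nodes split into 4 groups of 600.
nodes = [f"node-{i}" for i in range(2400)]
shards = assign_to_shards(nodes, num_shards=4, seed=42)
print([len(s) for s in shards])          # -> [600, 600, 600, 600]
```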
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS Block consists of 100 Tx Blocks. And as previously mentioned, there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Whether a node becomes a shard node or a DS node is determined by the result of a PoW cycle (Ethash) at the beginning of the DS Block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty will be allowed on the network. To put it in perspective: the average difficulty for one DS node is ~2 TH/s, equaling 2,000,000 MH/s, or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 MH/s each. Each DS Block, 10 new DS nodes are admitted. A shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of hashing power (Ethash) available you could mine solo.
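To sanity-check those hardware figures, here is a quick back-of-the-envelope calculation in Python (my own arithmetic, using only the numbers quoted above):

```python
GTX_1070_MHS = 35.4                       # Ethash hashrate of one GTX 1070 in MH/s

def gpus_needed(target_mhs):
    """Number of GTX 1070s required to reach a given Ethash hashrate."""
    return target_mhs / GTX_1070_MHS

ds_node_mhs = 2_000_000                   # ~2 TH/s average difficulty for a DS node
shard_node_mhs = 8_530                    # ~8.53 GH/s for a shard node

print(round(gpus_needed(ds_node_mhs)))    # ~56497 GPUs -> "55 thousand+"
print(round(gpus_needed(shard_node_mhs))) # ~241 GPUs   -> "around 240"
```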
 
The PoW cycle of 60 seconds is a peak performance and acts as an entry ticket to the network. The entry ticket is called a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. After every 100 Tx Blocks, which corresponds to roughly 1.5 hours, this PoW process repeats. In between, no PoW needs to be done, meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole we first must understand why Zilliqa goes through all of the above technicalities and understand a bit more what a blockchain on a more fundamental level is. Because the core of Zilliqa’s consensus protocol relies on the usage of pBFT (practical Byzantine Fault Tolerance) we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and just come back to this article. We will use this site to navigate through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing examples and definitions from Samuel Brooks’ Medium article: he describes a blockchain (like Zilliqa) as “a peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
 
Next, he states that “blockchains are fundamentally systems for managing valid state transitions”. For some more context, I recommend reading the whole Medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take a traffic light as an example: all its states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn’t matter if you encounter the traffic light today or tomorrow - it will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light’s state going from green to red (via amber) and another light’s going from red to green.
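As a tiny illustration of the state-machine idea, here is that traffic light in Python: every state and every valid transition is known in advance, so the machine behaves identically today or tomorrow.

```python
from enum import Enum

class Light(Enum):
    RED = "red"
    AMBER = "amber"
    GREEN = "green"

# Every valid transition is known in advance; anything else is simply not possible.
TRANSITIONS = {
    Light.GREEN: Light.AMBER,
    Light.AMBER: Light.RED,
    Light.RED: Light.GREEN,
}

def step(state: Light) -> Light:
    """Move the traffic light to its next (and only) valid state."""
    return TRANSITIONS[state]

state = Light.GREEN
for _ in range(3):
    state = step(state)
    print(state.value)                   # amber, red, green - same today or tomorrow
```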
 
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous block is added, resulting in a chain of blocks with transactions in it that everyone can verify from block #1 to the current 647,000+ block. The state is ever-changing and the states it can find itself in are infinite. And while the traffic light might work in tandem with various other traffic lights, that is rather insignificant compared to a public blockchain. Zilliqa consists of 2,400 nodes that need to work together to achieve consensus on what the latest valid state is, while some of these nodes may have latency or broadcast issues, drop offline or deliberately try to attack the network, etc.
 
Now go back to the Viewblock page, take a look at the number of transactions, addresses, block height and DS height, and then hit refresh. As expected, you see new, incremented values for one or all parameters. And how did the Zilliqa blockchain manage to transition from a previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
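Putting the two ideas together: a blockchain’s state transition can be sketched as a function that takes the current state plus a block of transactions and returns the next valid state, and consensus is about all nodes agreeing on that next state. A minimal, purely illustrative sketch (no gas, signatures or contracts):

```python
def apply_block(balances, block):
    """Apply a block of (sender, receiver, amount) payments to the current state.

    'balances' maps address -> balance. Invalid payments are skipped, so only
    valid state transitions are possible. Gas, signatures and contracts are
    ignored entirely - this is just the state-machine idea in miniature.
    """
    new_state = dict(balances)                     # never mutate the old state
    for sender, receiver, amount in block:
        if new_state.get(sender, 0) >= amount:
            new_state[sender] -= amount
            new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

state = {"A": 100, "B": 0}
state = apply_block(state, [("A", "B", 30), ("B", "A", 10)])
print(state)                                       # {'A': 80, 'B': 20}
```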
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communication rounds between nodes, and as such no GPU is involved (only CPU). As a result, the total energy consumed to keep the blockchain secure, decentralized and scalable is low.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate ⅓ of the nodes being dishonest (offline counts as Byzantine = dishonest) and the consensus protocol will keep functioning without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network will stall and a view change will be triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (>66%) do double-spend attacks become possible.
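Those thresholds are easy to express in code. The sketch below simply encodes the fractions described above and is illustrative only; the real protocol works with collected signatures rather than a known count of dishonest nodes.

```python
def network_status(dishonest, total):
    """Classify a pBFT committee by its fraction of dishonest/offline nodes."""
    fraction = dishonest / total
    if fraction <= 1 / 3:
        return "consensus proceeds normally"
    if fraction <= 2 / 3:
        return "network stalls; view change elects a new leader"
    return "safety lost: double-spend attacks become possible"

for bad in (200, 300, 500):                # out of a committee of 600 nodes
    print(bad, "->", network_status(bad, 600))
```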
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus, besides low energy consumption, is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain, it’s done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although lengthy already, we have skipped over some of the inner workings of Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the sybil resistance mechanism, pBFT, etc. Another thing we haven’t looked at yet is the degree of decentralisation.
 
Decentralisation
 
Currently, there are four shards, each consisting of 600 nodes: 1 shard of 600 so-called DS nodes (Directory Service - they need to achieve a higher difficulty than shard nodes) and 1,800 shard nodes (three shards), of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1,200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes that perform pBFT; there is no data on where the PoW sources are coming from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2,400 nodes, allowing more nodes and the formation of more shards, which will let the network keep scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of nodes) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were at first only operated by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes like exchanges had no incentive to open them up to the greater public, so they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
 
5% of the block rewards are being assigned to seed nodes (from the beginning in 2019) and those are being used to pay out ZIL stakers. The 5% block rewards with an annual yield of 10.03% translate to roughly 610 MM ZILs in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is being done by sending ZILs to a smart contract created by Zilliqa and audited by Quantstamp.
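A quick back-of-the-envelope check on those staking numbers (my own arithmetic from the figures above; the implied yearly block reward is a derived estimate, not an official figure):

```python
stakeable_zil = 610_000_000        # ~610 MM ZIL that can be staked (per the post)
annual_yield = 0.1003              # 10.03% annual staking yield

annual_payout = stakeable_zil * annual_yield   # ZIL paid out to stakers per year
implied_block_rewards = annual_payout / 0.05   # payout is 5% of all block rewards

print(f"annual payout to stakers: {annual_payout / 1e6:.1f} MM ZIL")        # ~61.2 MM
print(f"implied block rewards/yr: {implied_block_rewards / 1e9:.2f} B ZIL")  # ~1.22 B
```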
 
With a high number of DS and shard nodes, and with seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
 
Smart contracts
 
Let me start by saying I’m not a developer and my programming skills are quite limited. So I‘m taking the ELI5 route (maybe 12) but if you are familiar with Javascript, Solidity or specifically OCaml please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa’s smart contract language Scilla works and if you ask yourself “why another programming language?” check this article. And if you want to play around with some sample contracts in an IDE click here. The faucet can be found here. And more information on architecture, dapp development and API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. Link is time-stamped so you’ll start right away with a platform introduction, roadmap 2020 and afterwards a proper Scilla introduction.
 
Generalized: programming languages can be divided into being ‘object-oriented’ or ‘functional’. Here is an ELI5 given by software development academy: “all programs have two basic components, data – what the program knows – and behavior – what the program can do with that data. So object-oriented programming states that combining data and related behaviors in one place, is called “object”, which makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
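To make that distinction concrete, here is the same toy counter written both ways in Python (purely illustrative and nothing Zilliqa-specific; Scilla itself is much closer to OCaml than to Python):

```python
# Object-oriented style: data (count) and behaviour (increment) live together
# in one object, whose internal state changes over time.
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

# Functional style: data and behaviour are separate; the function returns a new
# value instead of mutating anything, which makes its effect easy to reason about.
def increment(count):
    return count + 1

c = Counter()
c.increment()
print(c.count)        # 1 - the object's state was changed in place
print(increment(0))   # 1 - the input value itself is untouched
```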
 
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
 
Scilla is blockchain agnostic and can be implemented on other blockchains as well; it is recognized by academics and won a so-called Distinguished Artifact Award at the end of last year.
 
One of the reasons why the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic to a blockchain (programming) means that you cannot afford to make mistakes; otherwise it could cost you. It’s all great and fun that blockchains are immutable, but updating your code because you found a bug isn’t the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
 
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
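As a rough illustration of how a gas-based fee is computed: fee = gas consumed times the gas price. The constants below are hypothetical, chosen only so that a simple transfer lands at the 0.001 ZIL quoted above; they are not Zilliqa's actual gas parameters.

```python
# Hypothetical constants, chosen only so a plain transfer comes out at 0.001 ZIL.
# These are NOT Zilliqa's real gas parameters.
GAS_PER_TRANSFER = 50          # gas units consumed by a simple payment
GAS_PRICE_ZIL = 0.00002        # price per gas unit, in ZIL

def fee(gas_units, gas_price=GAS_PRICE_ZIL):
    """Transaction fee = gas consumed * price per unit of gas."""
    return gas_units * gas_price

print(f"{fee(GAS_PER_TRANSFER):.4f} ZIL")   # 0.0010 ZIL for a plain A -> B transfer
print(f"{fee(5000):.4f} ZIL")               # 0.1000 ZIL for a heavier contract call
```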
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
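For instance, the ‘unexpected owner change’ issue boils down to a missing authorization check. A minimal sketch of the idea in Python (a real Scilla contract would express this as a guarded transition; this is only to show the pattern):

```python
class ToyContract:
    """Toy contract state with a guarded 'owner' field (illustrative only)."""

    def __init__(self, creator):
        self.owner = creator                 # critical state variable set at creation

    def set_owner(self, caller, new_owner):
        # Without this check, anyone could make themselves the owner - exactly
        # the kind of 'unexpected change to a critical state variable' above.
        if caller != self.owner:
            raise PermissionError("only the current owner may transfer ownership")
        self.owner = new_owner

contract = ToyContract(creator="alice")
contract.set_owner(caller="alice", new_owner="bob")         # allowed
# contract.set_owner(caller="mallory", new_owner="mallory") # would raise
```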
 
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
Scilla is being developed hand-in-hand with the formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the-art tool for mechanized proofs about properties of programs.
 
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they’ve written does what they intend it to do.
 
Smart contracts in a sharded environment and state sharding
 
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and the effect of state sharding). This is a complex topic. I’m not able to explain it any easier than what is posted here, but I will try to compress the post into something easy to digest.
 
Earlier on we established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can distinguish three categories of transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones, where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable; with Category 2 transactions it sometimes is, if the sending address is in the same shard as the smart contract; but with Category 3 you definitely need communication between the shards. Solving that requires a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
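A sketch of the routing question a sharded protocol has to answer for every transaction (illustrative only; Zilliqa’s actual rules are described in the post linked above):

```python
def needs_cross_shard_communication(sender_shard, contract_shards):
    """True if the transaction touches state outside the sender's shard.

    Category 1 (plain payment):      contract_shards is empty -> False
    Category 2 (one contract):       False only if it shares the sender's shard
    Category 3 (several contracts):  almost always True
    """
    return any(shard != sender_shard for shard in contract_shards)

print(needs_cross_shard_communication(0, []))         # Category 1 -> False
print(needs_cross_shard_communication(0, [0]))        # Category 2, same shard -> False
print(needs_cross_shard_communication(0, [2]))        # Category 2, other shard -> True
print(needs_cross_shard_communication(0, [0, 1, 3]))  # Category 3 -> True
```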
 
And this is where the downside of state sharding comes in. All shards in Zilliqa currently have access to the complete state. Yes, the state size (0.1 GB at the moment) grows and all of the nodes need to store it, but it also means that they don’t need to shop around for information held by other shards, which would require more communication and add more complexity. Links that require computer science and/or developer knowledge if you want to dig further: Scilla - language grammar, Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain, Gas Accounting, and NUS x Zilliqa: Smart contract language workshop.
 
Easier-to-follow links on programming Scilla: https://learnscilla.com/home and Ivan on Tech.
 
Roadmap / Zilliqa 2.0
 
There is no strictly defined roadmap, but here are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships
 
It’s not only technology in which Zilliqa seems to be excelling, as their ecosystem has been expanding and is starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PwC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought these initiatives live to the market. There is also an increasing list of organizations that are starting to provide digital payment services. Moreover, the Singaporean blockchain developer Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. This all suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa seems to already take advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies & unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, which is a fully regulated bank allowing for tokenization of assets and is aiming to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading will continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology that is being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use case of stablecoins. I recommend everybody to read this text that Amrit Kumar wrote (one of the co-founders). These stablecoins will be integrated in the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies will for example start to use stablecoins for payments or remittances, instead of it solely being used for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating from November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating on it being a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
“There are two basic building blocks in DeFi/OpFi though: 1) stablecoins as you need a non-volatile currency to get access to this market and 2) a dex to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
 
Additionally, they also have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation and research, and they are not from ASEAN or Singapore only but global: see the grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and the resulting low fees, which is why the UD team launched this on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and has recently been added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”; they have a mix of tech people, business people, marketeers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you follow their Twitter, their engagement is much higher than you’d expect for a coin that has approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data. According to their data, it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run. It was somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram also has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other official channels. Their local communities also seem to be growing.
 
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team (see www.zillacracy.com). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot and will also run their own non-custodial seed node for staking. This seed node will also allow them to start generating revenue and become a self-sustaining entity that could potentially scale up to a decentralized company working in parallel with the Zilliqa core team. Compared with the other smart contract platforms (e.g. Cardano, EOS, Tezos), none of them seem to have started a similar initiative (correct me if I’m wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’ and it gives many projects in the space a disadvantage.
 
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
 
Regarding Zeeves: this is a tipping bot for Telegram. They already have thousands of signups and they plan to keep upgrading it for more and more people to use (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It’s a very smart approach to grow their communities and get people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven’t covered everything (I’m also reaching the character limit, haha). So many updates have been happening lately that it's hard to keep up, such as the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures and widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has mentioned Zilliqa lately, acknowledging the project and noting that both projects have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
submitted by haveyouheardaboutit to CryptoCurrency [link] [comments]

Why i’m bullish on Zilliqa (long read)

Hey all, I've been researching coins since 2017 and have gone through 100s of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analysed Ethereum thereafter and from that moment I have a keen interest in smart contact platforms. I’m passionate about Ethereum but I find Zilliqa to have a better risk reward ratio. Especially because Zilliqa has found an elegant balance between being secure, decentralised and scalable in my opinion.
 
Below I post my analysis why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on ‘silica’ silicon dioxide which means “Silicon for the high-throughput consensus computer.”
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise just skim through and once you are zoning out head to the next part.
 
Technology and some more:
 
Introduction The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017 and since then they have achieved everything stated in the white paper and also created their own open source intermediate level smart contract language called Scilla (functional programming language similar to OCaml) too.
 
Mainnet is live since end of January 2019 with daily transaction rate growing continuously. About a week ago mainnet reached 5 million transactions, 500.000+ addresses in total along with 2400 nodes keeping the network decentralised and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left. Maximum supply is 21 billion with annual inflation being 7.13% currently and will only decrease with time.
 
Zilliqa realised early on that the usage of public cryptocurrencies and smart contracts were increasing but decentralised, secure and scalable alternatives were lacking in the crypto space. They proposed to apply sharding onto a public smart contract blockchain where the transaction rate increases almost linear with the increase in amount of nodes. More nodes = higher transaction throughput and increased decentralisation. Sharding comes in many forms and Zilliqa uses network-, transaction- and computational sharding. Network sharding opens up the possibility of using transaction- and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
 
Before we continue disecting how Zilliqa achieves such from a technological standpoint it’s good to keep in mind that a blockchain being decentralised and secure and scalable is still one of the main hurdles in allowing widespread usage of decentralised networks. In my opinion this needs to be solved first before blockchains can get to the point where they can create and add large scale value. So I invite you to read the next section to grasp the underlying fundamentals. Because after all these premises need to be true otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS Block consists of 100 Tx Blocks. And as previously mentioned there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Becoming a shard node or DS node is being defined by the result of a PoW cycle (Ethash) at the beginning of the DS Block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds and the submissions achieving the highest difficulty will be allowed on the network. And to put it in perspective: the average difficulty for one DS node is ~ 2 Th/s equaling 2.000.000 Mh/s or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 Mh/s. Each DS Block 10 new DS nodes are allowed. And a shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools and if you have large amounts of hashing power (Ethash) available you could mine solo.
 
The PoW cycle of 60 seconds is a peak performance and acts as an entry ticket to the network. The entry ticket is called a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with these identities. And after every 100 Tx Blocks which corresponds to roughly 1,5 hour this PoW process repeats. In between these 1,5 hour no PoW needs to be done meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole we first must understand why Zilliqa goes through all of the above technicalities and understand a bit more what a blockchain on a more fundamental level is. Because the core of Zilliqa’s consensus protocol relies on the usage of pBFT (practical Byzantine Fault Tolerance) we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and just come back to this article. We will use this site to navigate through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing examples and definitions given by Samuel Brooks’ medium article, he describes the definition of a blockchain (like Zilliqa) as:
“A peer-to-peer, append-only datastore that uses consensus to synchronise cryptographically-secure data”.
 
Next he states that: >“blockchains are fundamentally systems for managing valid state transitions”.* For some more context, I recommend reading the whole medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take traffic lights as an example: all its states (red, amber and green) are predefined, all possible outcomes are known and it doesn’t matter if you encounter the traffic light today or tomorrow. It will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button resulting in one traffic lights’ state going from green to red (via amber) and another light from red to green.
 
With public blockchains like Zilliqa this isn’t so straightforward and simple. It started with block #1 almost 1,5 years ago and every 45 seconds or so a new block linked to the previous block is being added. Resulting in a chain of blocks with transactions in it that everyone can verify from block #1 to the current #647.000+ block. The state is ever changing and the states it can find itself in are infinite. And while the traffic light might work together in tandem with various other traffic lights, it’s rather insignificant comparing it to a public blockchain. Because Zilliqa consists of 2400 nodes who need to work together to achieve consensus on what the latest valid state is while some of these nodes may have latency or broadcast issues, drop offline or are deliberately trying to attack the network etc.
 
Now go back to the Viewblock page take a look at the amount of transaction, addresses, block and DS height and then hit refresh. Obviously as expected you see new incremented values on one or all parameters. And how did the Zilliqa blockchain manage to transition from a previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever changing state of the blockchain. pBFT requires a series of network communication between nodes, and as such there is no GPU involved (but CPU). Resulting in the total energy consumed to keep the blockchain secure, decentralised and scalable being low.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimisation on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate ⅓ of the nodes being dishonest (offline counts as Byzantine = dishonest) and the consensus protocol will function without stalling or hiccups. Once there are more than ⅓ of dishonest nodes but no more than ⅔ the network will be stalled and a view change will be triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (>66%) double spend attacks become possible.
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus besides low energy is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain it’s done. Lastly, take a look at this article where three types of finality are being defined: probabilistic, absolute and economic finality. Zilliqa falls under the absolute finality (just like Tendermint for example). Although lengthy already we skipped through some of the inner workings from Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture on it. Enough about PoW, sybil resistance mechanism, pBFT etc. Another thing we haven’t looked at yet is the amount of decentralisation.
 
Decentralisation
 
Currently there are four shards, each one of them consisting of 600 nodes. 1 shard with 600 so called DS nodes (Directory Service - they need to achieve a higher difficulty than shard nodes) and 1800 shard nodes of which 250 are shard guards (centralised nodes controlled by the team). The amount of shard guards has been steadily declining from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are being located in the US but those are only the (CPU parts of the) shard nodes who perform pBFT. There is no data from where the PoW sources are coming. And when the Zilliqa blockchain starts reaching their transaction capacity limit, a network upgrade needs to be executed to lift the current cap of maximum 2400 nodes to allow more nodes and formation of more shards which will allow to network to keep on scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of nodes) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were first only operated by Zilliqa themselves, exchanges and Viewblock. Operators of seed nodes like exchanges had no incentive to open them for the greater public.They were centralised at first. Decentralisation at the seed nodes level has been steadily rolled out since March 2020 ( Zilliqa Improvement Proposal 3 ). Currently the amount of seed nodes is being increased, they are public facing and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved with consensus! That is still PoW as entry ticket and pBFT for the actual consensus.
 
5% of the block rewards are being assigned to seed nodes (from the beginning in 2019) and those are being used to pay out ZIL stakers.The 5% block rewards with an annual yield of 10.03% translates to roughly 610 MM ZILs in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non custodial version (starting in Q3 2020). Staking is being done by sending ZILs to a smart contract created by Zilliqa and audited by Quantstamp.
 
With a high amount of DS & shard nodes and seed nodes becoming more decentralised too, Zilliqa qualifies for the label of decentralised in my opinion.
 
Smart contracts
 
Let me start by saying I’m not a developer and my programming skills are quite limited. So I‘m taking the ELI5 route (maybe 12) but if you are familiar with Javascript, Solidity or specifically OCaml please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa’s smart contract language Scilla works and if you ask yourself “why another programming language?” check this article. And if you want to play around with some sample contracts in an IDE click here. Faucet can be found here. And more information on architecture, dapp development and API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. Link is time stamped so you’ll start right away with a platform introduction, R&D roadmap 2020 and afterwards a proper Scilla introduction.
 
Generalised: programming languages can be divided into being ‘object oriented’ or ‘functional’. Here is an ELI5 given by software development academy: > “all programmes have two basic components, data – what the programme knows – and behaviour – what the programme can do with that data. So object-oriented programming states that combining data and related behaviours in one place, is called “object”, which makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behaviour are different things and should be separated to ensure their clarity.”
 
Scilla is on the functional side and shares similarities with OCaml: > OCaml is a general purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
 
Scilla is blockchain agnostic, can be implemented onto other blockchains as well, is recognised by academics and won a so called Distinguished Artifact Award award at the end of last year.
 
One of the reasons why the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities safety is that adding logic on a blockchain, programming, means that you cannot afford to make mistakes. Otherwise it could cost you. It’s all great and fun blockchains being immutable but updating your code because you found a bug isn’t the same as with a regular web application for example. And with smart contracts it inherently involves cryptocurrencies in some form thus value.
 
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa for Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
 
Scilla also allows for formal verification. Wikipedia to the rescue:
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
 
Simply put, with Scilla and accompanying tooling developers can be mathematically sure and proof that the smart contract they’ve written does what he or she intends it to do.
 
Smart contract on a sharded environment and state sharding
 
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and what is the effect of state sharding). This is a complex topic. I’m not able to explain it any easier than what is posted here. But I will try to compress the post into something easy to digest.
 
Earlier on we have established that Zilliqa can process transactions in parallel due to network sharding. This is where the linear scalability comes from. We can define simple transactions: a transaction from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2) and the most complex ones where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference of the other shards. With Category 1 transactions that is doable, with Category 2 transactions sometimes if that address is in the same shard as the smart contract but with Category 3 you definitely need communication between the shards. Solving that requires to make a set of communication rules the protocol needs to follow in order to process all transactions in a generalised fashion.
 
And this is where the downsides of state sharding comes in currently. All shards in Zilliqa have access to the complete state. Yes the state size (0.1 GB at the moment) grows and all of the nodes need to store it but it also means that they don’t need to shop around for information available on other shards. Requiring more communication and adding more complexity. Computer science knowledge and/or developer knowledge required links if you want to dig further: Scilla - language grammar Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain Gas Accounting NUS x Zilliqa: Smart contract language workshop
 
Easier to follow links on programming Scilla https://learnscilla.com/home Ivan on Tech
 
Roadmap / Zilliqa 2.0
 
There is no strict defined roadmap but here are topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships  
It’s not only technology in which Zilliqa seems to be excelling as their ecosystem has been expanding and starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PWC 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% of them have already brought the initiatives live to the market. There is also an increasing list of organisations that are starting to provide digital payment services. Moreover, Singaporean blockchain developers Building Cities Beyond has recently created an innovation $15 million grant to encourage development on its ecosystem. This all suggest that Singapore tries to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa already seems to be taking advantage of this and recently helped launch Hg Exchange on their platform, together with the financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange, financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. Think, for example, of companies such as Grab, AirBnB and SpaceX that are not available for public trading right now; Hg Exchange will allow investors to buy shares of private companies and unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that it became a partner and shareholder in TEN31 Bank, a fully regulated bank that allows for the tokenization of assets and aims to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading continue to increase, Zilliqa's public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance and the technology being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket, starting with XSGD. As many of you know, stablecoins are currently mostly used for trading, but Zilliqa is actively trying to broaden their use case. I recommend everybody read this text that Amrit Kumar (one of the co-founders) wrote. These stablecoins will be integrated into traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if, for example, retailers and companies start to use stablecoins for payments or remittances instead of them being used solely for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating from November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX built by Switcheo is coming to Zilliqa, allowing cross-chain trading (atomic swaps) between ETH-, EOS- and ZIL-based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin, and since Zilliqa is all about regulations and being compliant, I'm speculating that it will be a regulated USD stablecoin. Furthermore, XSGD has already been created and is visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
"There are two basic building blocks in DeFi/OpFi though: 1) stablecoins, as you need a non-volatile currency to get access to this market, and 2) a DEX to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks, with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future."
 
Additionally, they also have the ZILHive initiative that injects capital into projects. There have already been 6 waves of various teams working on infrastructure, innovation and research, and they are not just from ASEAN or Singapore but global: see the grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem, including individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain thanks to its ability to scale and the resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland) and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and has recently been added to Binance's margin and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have "tech people": they have a mix of tech people, business people, marketeers, scientists and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you follow their Twitter, their engagement is much higher than you would expect for a coin with approximately 80k followers. They have also been 'coin of the day' on LunarCrush many times (LunarCrush tracks real-time cryptocurrency value and social data). According to that data, Zilliqa seems to have a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen in the last months, Zilliqa seems to be on its own bull run: it was ranked somewhere in the 100s a few months ago and is currently #46 on CoinGecko. Their official Telegram has over 20k members and is very active, and their community channel, now over 7k, is larger and more active than many other projects' official channels. Their local communities also seem to be growing.
 
Moreover, their community started 'Zillacracy' together with the Zilliqa core team (see www.zillacracy.com). It's a community-run initiative where people from all over the world now help with marketing and development for Zilliqa. Since its launch in February 2020 they have been doing a lot, and they will also run their own non-custodial seed node for staking. This seed node will allow them to generate revenue and become a self-sustaining entity that could potentially scale up into a decentralized organisation working in parallel with the Zilliqa core team. Comparing this to the other smart contract platforms (e.g. Cardano, EOS, Tezos etc.), they don't seem to have started similar initiatives (correct me if I'm wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the 'power of the community'. This is something you cannot 'buy with money' and it puts many projects in the space at a disadvantage.
 
Zilliqa has also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZIL while tweeting with a specific hashtag. They recently used it in partnership with the Singapore Red Cross for a marketing campaign, after an initial pilot program. It seems like a very valuable social product with a good use case, and I can see a lot of traditional companies entering the space through it, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get a network effect is a very smart and innovative idea.
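
As a rough illustration of the 'tokenize a hashtag' idea, here is a hypothetical Python sketch of a campaign that splits a funded reward pool among users who tweeted a hashtag. The structure, the names and the idea of an off-chain oracle verifying tweets are invented for illustration and are not SocialPay's actual implementation.

```python
# Hypothetical sketch of a hashtag reward campaign (not SocialPay's actual code).
class HashtagCampaign:
    def __init__(self, hashtag: str, reward_pool_zil: float):
        self.hashtag = hashtag
        self.pool = reward_pool_zil
        self.participants: set[str] = set()

    def register_tweet(self, user_address: str, tweet_text: str) -> None:
        """An off-chain oracle would verify the tweet; here we only check the text."""
        if self.hashtag.lower() in tweet_text.lower():
            self.participants.add(user_address)

    def payout(self) -> dict[str, float]:
        """Split the pool equally among everyone who used the hashtag."""
        if not self.participants:
            return {}
        share = self.pool / len(self.participants)
        return {addr: share for addr in self.participants}

campaign = HashtagCampaign("#ZILRedCross", reward_pool_zil=10_000)
campaign.register_tweet("zil1alice", "Donating today #ZILRedCross")
campaign.register_tweet("zil1bob", "Proud to support #zilredcross")
print(campaign.payout())   # each of the two participants gets 5000.0
```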
 
Regarding Zeeves: this is a tipping bot for Telegram. They already have thousands of signups and plan to keep upgrading it so more and more people can use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It's a very smart approach to growing their communities and getting people familiar with ZIL, and I can see it becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need, and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven't covered everything (I'm also reaching the character limit haha). There are so many updates happening lately that it's hard to keep up: the International Monetary Fund mentioning Zilliqa in a report, custodial and non-custodial staking, Binance margin, futures and the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has mentioned Zilliqa lately, acknowledging the project and noting that both projects have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!
submitted by haveyouheardaboutit to CryptoCurrency [link] [comments]

MarcoPoloProtocol #2 #AMA Summary: #AMAwithKenny - How Marcopolo Protocol Designs a New P2P E-cash System?

#AMAwithKenny
On February 21, MarcoPolo Protocol hosted its second online AMA in the MarcoPolo Protocol English Telegram group (https://t.me/MarcoPoloMAP) and invited MarcoPolo Protocol core developer Kenny as the guest to answer questions from the community.
Here are the highlights from the AMA event.
#AMAwithKenny Question 1: Please introduce yourself to everyone, also, could you tell us what is MarcoPolo Protocol?
Hello everybody, I’m Kenny. I got into the field of blockchain at the end of 2016, mainly working on the research and development of public chain consensus protocols. At the moment, I’m a core developer of MarcoPolo Protocol.
MarcoPolo Protocol is an open-source blockchain protocol strategically invested in by Softbank. It is a new peer-to-peer electronic cash system infrastructure that aims to achieve resource sharing and intelligent inter-chain scheduling, giving decentralized applications better scalability and lower transaction fees across global blockchain networks.
#AMAwithKenny Question 2: Please describe the progress of your project in the past year.
From the perspective of funding and partnerships, MarcoPolo Protocol caught the attention of Softbank last year and received its investment, followed by a listing on the well-known exchange KuCoin. Its partners range from Ksher, a global electronic payment company, to strategic partnerships with several chains that bring both core developer capacity and industry resources.
According to the first step of the roadmap, Gravity has been implemented. It is a central component and currently supports resource sharing between TrueChain and Ethereum.
Based on Gravity, the MarcoPay DAPP and the POB community governance DAPP have been implemented.
Building a technology community and interacting with developers through online AMAs, Discord and Riot has attracted many developers' attention to MAP.
Development of PoC-1 has started: the consensus, block production and RPC modules of PoC-1 are being built based on the rule language.
In terms of products and applications, MarcoPolo Protocol has already implemented a wallet application called MarcoPay, which is widely used in the community, with roughly 70,000 users and community members at the moment, and it's growing rapidly.
We have cooperated with Ksher, a global electronic payment company; at present, payment has already been implemented in some stores in Thailand.
We have developed communities of over 70,000 people, covering South Korea, Nigeria, India, Vietnam, Indonesia, Turkey, Brazil, Australia and Russia. Also, our token has been listed on CoinMarketCap and CoinGecko.
#AMAwithKenny Question 3: Compared to Polkadot and Cosmos, what are the distinct and innovative parts of MarcoPolo Protocol? Is it still necessary to develop other public chains?
Polkadot empowers blockchain networks to work together under the protection of shared security; Cosmos is a decentralized network of independent parallel blockchains powered by BFT consensus algorithms like Tendermint.
Similar to the other two projects, MarcoPolo Protocol could improve the scalability and interoperability of the blockchain network.
The main differences are the properties of resource sharing and intelligent inter-chain scheduling; this is MarcoPolo Protocol's innovation. After Satoshi Nakamoto proposed the p2p e-cash system, various public chains proposed a variety of solutions to accomplish p2p payments without a third party. With the coming popularity of digital currency payments and exchange, higher TPS is in strong demand. Relying only on a single chain to process transactions would not significantly improve processing speed; therefore, more chains are needed to work collectively in order to achieve TPS sharing.
MarcoPolo Protocol intends to provide a convenient and economical way of p2p payment while fully utilizing resources from different blockchains.
#AMAwithKenny Question 4: Why do you join MarcoPolo Protocol? What attracts you the most?
MarcoPolo Protocol is an open-source community formed by a large number of blockchain open-source technology enthusiasts and professionals. People apply their expertise and innovative ideas in many areas such as theoretical research, promotion, coding, system engineering, etc. MarcoPolo Protocol is committed to creating a new peer-to-peer electronic cash system that truly fulfills Nakamoto's vision.
#AMAwithKenny Question 5: Can you tell us more about how MarcoPolo Protocol achieves resource sharing among chains? What is the mechanism to accomplish intelligent inter-chain scheduling?
Good questions, let me answer them together.
In terms of sharing resources across chains: in MarcoPolo Protocol, we select an appropriate synerchain as the target isomorphic chain to process transactions and then realize TPS sharing between synerchains through the MTP protocol.
MarcoPolo Protocol selects a secure, efficient and low-cost third-party public chain as the target heterogeneous chain, and then uses the computing power of that target chain to process transactions. The process requires the related interact-chain to interact with the target chain through BRPC and then move applications and transactions onto the target chain in order to complete them. The target chain is therefore treated as a Layer 2 that processes transactions for MarcoPolo Protocol. In this way, the TPS capacity of the whole MarcoPolo Protocol network can be expanded by stacking multiple target chains.
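
To illustrate the general idea of expanding capacity by stacking target chains, here is a toy Python dispatcher that spreads incoming transactions across several target chains. The chain names, capacities and the least-loaded routing rule are invented for the example and are not MarcoPolo's actual MTP/BRPC logic.

```python
# Toy illustration of TPS stacking; names, numbers and routing rule are made up.
from dataclasses import dataclass

@dataclass
class TargetChain:
    name: str
    tps_capacity: int
    queued: int = 0

def dispatch(txs: int, chains: list[TargetChain]) -> None:
    """Greedy routing: each transaction goes to the chain with the most spare capacity."""
    for _ in range(txs):
        chain = max(chains, key=lambda c: c.tps_capacity - c.queued)
        chain.queued += 1

chains = [TargetChain("chain-A", 1000), TargetChain("chain-B", 2000), TargetChain("chain-C", 500)]
dispatch(3000, chains)
print({c.name: c.queued for c in chains})               # heavier chains absorb more of the load
print("aggregate capacity:", sum(c.tps_capacity for c in chains), "tps")  # 3500 vs. 2000 on the best single chain
```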
Intelligent scheduling across chains is the core of MarcoPolo Protocol. We wish to build interactions across chains through MTP+BRPC, so as to solve the problem of inter-chain interoperability.
#AMAwithKenny Question 6: As most people know, the ecosystem determines the future of public chains. How does MarcoPolo Protocol think about its ecological construction?
First of all, I highly agree with the significance of the ecosystem for public chains. We believe that there is a natural demand for payment, exchange, and borrowing of digital cryptocurrencies in the future.
MarcoPolo Protocol primarily targets three application scenarios: Dpayment, DEX, and Defi. We are building an E-cash system infrastructure that will serve the whole blockchain ecosystem.
In the electronic cash system, DEX and Defi are two important applications. Currently, DEX and Defi are structured in a single chain or ecosystem. We wish that DAPPs built on MarcoPolo Protocol could share DEX and Defi achievements of other ecosystems.
#AMAwithKenny Question 7: How many people in the MarcoPolo Protocol team?
Now the team has more than 30 people, more than 20 developers, 5 marketing staff, 7 operating staff, and distributed offices in many countries. They come from Singapore, Thailand, South Korea, China, Australia, Brazil, Indonesia, Turkey, and other countries.
#AMAwithKenny Question 8: What does the technical roadmap look like and what more is coming from MarcoPolo Protocol? At which development stage is it in right now?
MarcoPolo Protocol Road Map:
The First Phase (2019 Q1), Gravity: the interoperability module Gravity has been developed, realizing usable interoperability between the third-generation public chain and high-consensus digital currencies such as BTC and ETH; shared computing power and performance will gradually expand to more high-performance public chains through interoperability.
The Second Phase (2019 Q3), Electromagnet: the online retail payment application MarcoPay and the POB community governance DAPP.
The Third Phase (2020 Q4), Interaction: the independently developed MarcoPolo main network StandardChain will be launched, achieving APoS, on-chain governance and the architecture design of InteractChain.
The Fourth Phase (2022 Q2), Grand Unification: realization of heterogeneous cross-chain operability and resource sharing between chains.
At present, core team members concentrate on the research and design of the protocol. StandardChain is still in its early-stage proof of concept.
Also, during this stage, we launched MarcoPay DAPP. You can download it here and use it:
https://www.marcopolopay.org/download
In addition, we are running an Airdrop activity right now (https://t.me/MarcoPay_Airdrop_Bot). You can join it to win MAPC and swap your MAPC to MAP in the MarcoPay APP.
#AMAwithKenny Question 9: MarcoPolo Protocol technical community has attracted developers from Bitcoin and Cosmos. Could you please talk about how to participate in the technical community?
MarcoPolo Protocol's technical community encourages researchers, technology evangelists and developers to participate. Once the protocol has passed through the PoC phase and can be implemented in modules, we will definitely need more developers to join the community. After sorting out the research topics, we will share them with everyone on the Discord and Riot channels in our technical community. Click the links below to join the community, follow the latest progress and take part in the discussion.
Discord: https://discord.gg/KTJ2Qzc
Riot: https://riot.im/app/#/room/#MarcoPolo:matrix.org
#AMAwithKenny Question 10: Developers are essential to the development of the technical community. Could you please tell us what the incentives of MarcoPolo Protocol are in this regard?
Regarding the incentive mechanism, as far as I know, MarcoPolo has set up a pool of 9% of MAP for incentives to the technical community. The incentives are mainly used in two areas: technology and community.
Technology has three parts: research, technical promotion, and development;
Community includes online AMA, Meet-ups, Workshops, etc. MarcoPolo Protocol has been actively exploring ways to motivate great and talented people to get involved in this open-source project through different levels.
#AMAwithKenny Question 11: What’s the advantage of MarcoPolo Protocol’s APoS consensus? What are the advantages and disadvantages compared with PoS? And what problems APoS has solved?
APoS is an Assets Proof of Stake consensus algorithm. It has three advantages (a rough numerical sketch follows the list below):
  1. It supports staking multiple digital cryptocurrencies, so more people can participate in the ecosystem.
  2. It protects assets: people still keep their original BTC or ETH, they simply stake it into the APoS system.
  3. It addresses the tendency towards centralization found in traditional PoS/DPoS consensus mechanisms.
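
As a rough, purely illustrative sketch of how a validator's weight could be computed from several staked assets (the asset prices, the USD-value formula and all names below are assumptions for the example, not the actual APoS specification):

```python
# Toy multi-asset staking weight (not the actual APoS rules); prices are example values.
EXAMPLE_PRICES_USD = {"BTC": 60_000, "ETH": 3_000, "MAP": 0.02}

def stake_weight(staked_assets: dict[str, float]) -> float:
    """Sum the USD value of everything a validator has locked, across assets."""
    return sum(EXAMPLE_PRICES_USD[asset] * amount for asset, amount in staked_assets.items())

validators = {
    "alice": {"BTC": 0.5},                 # she keeps her BTC, it is merely locked
    "bob":   {"ETH": 10, "MAP": 100_000},
}
weights = {name: stake_weight(assets) for name, assets in validators.items()}
total = sum(weights.values())
print({name: round(w / total, 3) for name, w in weights.items()})   # {'alice': 0.484, 'bob': 0.516}
```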
#AMAwithKenny Question 12: Are there any new products coming out soon?
Good question, there is an exciting product coming up soon.
We are launching a new built-in product, Milione, in 5 days.
I can share some of its features:
  1. You can deposit MAP to earn USDT
  2. The yield rate is higher than most of the products on the market
  3. No lock-up, you can withdraw your asset anytime you want
  4. Profitable referral mechanism: you can invite your friends and earn money together.
Besides products, there are many more partnerships coming up; we have reached out to many other well-known companies and exchanges. Please stay tuned: once something is confirmed, we will announce it in this group and on our Twitter.
#AMAwithKenny Bonus Question 1: Is there anything you want to say on the MAP piece?
Let's wait and see
#AMAwithKenny Bonus Question 2: What payment scenario do you have so far? Where can I use it?
We have launched the MarcoPay DAPP, you can download it here:
https://www.marcopolopay.org/download
This DAPP can manage digital assets on multiple chains simultaneously, providing a "safe, professional, decentralized" solution for global online and offline payments. You can use MarcoPay to shop in some stores in Thailand, and more shops will support MarcoPay across Southeast Asia, South America, Africa, Europe and other regions.
#AMAwithKenny Bonus Question 3: What is Kenny's opinion on the future of the MarcoPolo Protocol token, and what were the most difficult challenges he encountered during the development of MarcoPolo Protocol?
MAP is planning to open peer-to-peer payment at more than 1,000 retail shops in 2020 across Brazil, Turkey, Indonesia, Korea and Thailand. MAP has a strong technical team, including developers and researchers around the world, to make sure the project is delivered with high quality and on time.
#AMAwithKenny Bonus Question 4: What's the business model of MAP? Is it building another public chain? Or do you want to charge fees for the applications hosted on the chain? How do you make the business sustainable and profitable in the long run? How much return do you expect to get in a 2-3 year time frame?
The MarcoPolo Protocol p2p electronic cash infrastructure will enable DeFi, Dpayment and DEX applications. Successful applications will help to boost the on-chain ecosystem built on the MAP infrastructure, and applications that make profits will feed back to the community in the form of the MAP token only. We will make sure the community and application developers win together, so that we can achieve long-term success.
Since being listed on KuCoin 3 months ago, MAP has already gained fourfold. We are quite sure the token price will go up further under the current strategy.
___________________________________________________________
Recent Activity:
Airdrop, join it on Telegram at https://t.me/MarcoPay_Airdrop_Bot
💰Per Participant: 200 MAPC
💰Each Invite: 50 MAPC
💰You can withdraw MAPC to your wallet instantly and swap MAPC to MAP on MarcoPay
⬇️Start Receiving Airdrops:
https://t.me/MarcoPay_Airdrop_Bot
⬇️Download MarcoPay:
https://www.marcopolopay.org/download
___________________________________________________________
Twitter:
https://twitter.com/marcopologlobal
Telegram:
English: https://t.me/MarcoPoloMAP
Channel: https://t.me/MAP_POB_channel
Indonesia: https://t.me/MarcoPayindonesia
Russia: https://t.me/MarcoPayrussian
Turkey: https://t.me/marcopoloturkiye
Vietnam: https://t.me/marcopolovietnam
Bangladesh: https://t.me/MarcoPay_BD
submitted by SamJia to MarcoPoloProtocol [link] [comments]

A Grab Bag of Thoughts on ETC and Forks

1) Three months ago I made a statement in an interview with Morgen Peck as follows:
“I generally support just about every secession attempt that comes along,” he says. “If in the future there is that kind of a dispute in Ethereum, I’d definitely be quite happy to see Ethereum A go in one direction and Ethereum B go the other.”
I do have principles, and this is a principle that I have so far held consistently. It would of course be grossly hypocritical for me to (correctly) decry bitcoin maximalism back in 2014, and then start shouting "one chain to rule them all! network effects!" the moment it becomes suitable to me. Rather, I believe, just as I had stated in my 2014 post on silos, that:
If there truly is one consensus mechanism that is best, why should we not have a large merger between the various projects, come up with the best kind of decentralized computer to push forward as a basis for the crypto-economy, and move forward together under one unified system? In some respects, this seems noble; “fragmentation” certainly has undesirable properties, and it is natural to see “working together” as a good thing. In reality, however, while more cooperation is certainly useful, and this blog post will later describe how and why, desires for extreme consolidation or winner-take-all are to a large degree exactly wrong – not only is fragmentation not all that bad, but rather it’s inevitable, and arguably the only way that this space can reasonably prosper.
I personally admittedly find ETC's social contract, community and raison d'être less exciting and satisfying and would not personally feel the same passion for it that I do for ETH, but this is simply my judgement, and the judgement of the very many members of the community that have voted or otherwise expressed assent to the fork. Anyone who feels sufficiently strongly in the other direction is welcome to focus on the ETC chain, and we will see if it remains viable.
2) But those were just my beliefs and intermediate values. How do we know that this "let a hundred flowers bloom" position is actually correct? We can actually discover a lot of facts from the current situation. First of all, we can see that the price of ETH + ETC has been remarkably stable around $14.3 for the past 2.5 days, despite great volatility in each component. This is still early-stage, but it suggests that the value of at least the cryptocurrency component of the ecosystem actually isn't a superlinear function that favors monopoly.
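
A quick back-of-the-envelope way to see the argument: under a Metcalfe-style quadratic value function a community split should visibly destroy combined value, while under a linear one the sum of the parts equals the whole. The numbers and the choice of a 90/10 split below are purely illustrative, not taken from the post.

```python
# Purely illustrative: how a network split affects total value under two value functions.
def linear(n):    return n
def quadratic(n): return n ** 2          # Metcalfe-style, superlinear

users = 100
for value in (linear, quadratic):
    whole = value(users)
    split = value(90) + value(10)        # a hypothetical 90/10 community split
    print(value.__name__, round(split / whole, 2))
# linear    -> 1.0  (sum of the parts equals the whole, as the stable ETH + ETC price suggests)
# quadratic -> 0.82 (a split should wipe out a visible share of the value)
```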
Second, we can see from several sources (including exchange order books, but also public pronouncements from Barry Silbert et al) that incoming interest into ETC is actually coming from the bitcoin side even more than it is from the ethereum side. And this is a core tenet of blockchain pluralism: by leaving open an option to join an alternate system if an individual so chooses, you can satisfy the varying needs of larger groups of people.
3) I may as well offer my own views on hard forking. I do not believe that using hard forks as a primary paradigm to resolve thefts or to deal with unethical applications is a long-term viable strategy. This time, we got very lucky that the stolen DAO ETH were conveniently stuck in a known address for 35 days. Next time around, the funds will likely be being sold on exchanges before the developers even know it, and the only solution will be a rollback - and Casper will make rollbacks infeasible due to its economic finality mechanism in any case.
3b) "Evil dapps" can constantly move their contracts around in ways that evade a necessarily slow-moving hard fork, so while we can annoy them, "softer" means of mitigating the harm of such applications must necessarily still be sought out.
3c) The blockchain itself is very far from the eventual vision of a hyper-scalable, efficient and secure world computer and will see several more iterations to move closer to that goal; if you wish you may view Casper as a completely independent blockchain that happens to have a 100% state-copying premine from ETH, and in fact this may even be the cleanest way to implement it in code. I personally was okay with a fork in light of this context, together with a philosophical belief that a principle does not need to have literally infinite weight in order to have value.
In the near to mid-term future, I expect that there will be many small applications rather than one big application, and so no single failure will be enough to greatly impact the ecosystem; hence it strikes me as quite unlikely that application rescue hard forks will become a regular thing (note that some disagree; Vlad would love to have hard forks for many more things, though I'll let him defend his own views :) )
At this point, I am hypothetically open to two kinds of application rescue hard forks:
i) A fork in the very unlikely case that the Solidity compiler proves to have a serious bug that puts 5-10 million ETH in danger.
ii) There has been a medium amount of ether that has been sent to unspendable addresses because users were using buggy ethereum-js libraries that created the address from the public key incorrectly. I would be OK with a change, for example as part of metropolis, that adds a new transaction type that effectively makes the most common categories of such unspendable addresses spendable by their cryptographically provable rightful owners (but I would only be ok with this with broad consensus and even still it's dependent on technical feasibility and tradeoffs in code complexity).
In the future, I suspect that both possibilities will recede over time.
3d) In the short and medium term, we are still under conditions of high technical uncertainty. For example, Vlad and I continue to argue about whether or not a fixed currency supply can offer sufficient incentives through transaction fees alone to secure the network. If we had agreed, for example, to a "100 million ETH and never a single bit more" principle on day one, we would have dug ourselves into a rather deep hole if the research ends up showing that low inflation (or something more complex, like expected low deflation but the possibility of low inflation under conditions of low Casper participation) is the only safe way forward. Similarly, "it is possible to create a contract that lasts forever" is also something that is economically dangerous to commit to. Hence, principles on these kinds of matters may need to be settled only later.
4) Concerns about moral hazard are, in this case, IMO overblown; on the contrary, despite the fork, I have been extremely impressed by the sheer number of formal verification and other secure contract programming projects that have recently emerged in academia. Writing this from inside the middle of an Ethereum research workshop in Cornell, I am very optimistic that the number of bugs in code will decrease greatly over the next year.
4b) This does however mean that there is now a much larger burden on high-level language developers, and I personally do not have the time or ability to maintain Serpent at a level that I personally find satisfactory. I am personally continuing to use it as a language for experimenting with Casper simulations, but I welcome proposals from the community for how and if it can find a niche in other contexts.
submitted by vbuterin to ethereum [link] [comments]

Christian Decker on why Off-chain Solutions Could Solve Blockchain’s Scalability Issues

Scalability is an issue that has long plagued blockchain developers. It is one of the primary obstacles to a future in which the technology’s potential is realised and adoption becomes widespread. It is a problem that goes right to the core of blockchain and, as such, finding a solution has proved difficult, exposing tremendous divisions in the community.
The ‘scalability trilemma’ is a term coined by Ethereum co-founder Vitalik Buterin. It posits that blockchain systems can only have two out of the three following properties:
Decentralisation – where each participant in the system can access only O(c) resources
Scalability – where the system can process O(n) > O(c) transactions
Security – where the system can prevent an attack with up to O(n) resources.
“Bitcoin supports just seven transactions per second and Ethereum, 20 transactions per second… Visa can handle 24,000 transactions per second – 56,000 at its peak”
The two main blockchains, Bitcoin and Ethereum, both prioritise decentralisation and security at the expense of scalability. As a result, Bitcoin supports just seven transactions per second and Ethereum, 20 transactions per second. For point of reference, Visa can handle 24,000 transactions per second – 56,000 at its peak.
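Converting those per-second figures into daily throughput makes the gap even more tangible (simple arithmetic using the numbers quoted above):

```python
# Daily throughput implied by the per-second figures quoted above.
SECONDS_PER_DAY = 24 * 60 * 60

for name, tps in [("Bitcoin", 7), ("Ethereum", 20), ("Visa (typical)", 24_000)]:
    print(f"{name:>15}: {tps * SECONDS_PER_DAY:>13,} tx/day")
# Bitcoin        :       604,800 tx/day
# Ethereum       :     1,728,000 tx/day
# Visa (typical) : 2,073,600,000 tx/day
```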
As a result, at the height of last year’s crypto-mania towards the end of 2017, there were numerous reports of Bitcoin transactions taking hours or even days to complete. High traffic also usually means high fees – with reports of BTC transactions that cost upwards of $100 commonplace.
If blockchain solutions are going to achieve anything like Visa's adoption levels, such slow processing speeds will severely hamper their efforts to reach critical mass. The cost and speed will ultimately keep many blockchain solutions on paper and prevent them from reaching production, and will continue to do so unless we see some kind of breakthrough.
Uncovering a Solution
The good news is that a number of excellent projects are being run by talented developers to solve the issue of scalability, working on a variety of different solutions. One way of helping blockchain scale is taking transactions ‘off chain’.
Blockstream core tech engineer, Christian Decker, is one man working on these solutions and his work has received much acclaim.
Christian has been involved in Bitcoin since 2009, and in 2012 he was offered a PhD candidate position in the Distributed Computing Group at ETH Zurich. The goal of his research was to improve the understanding of the underlying consensus mechanisms and to enable the network to scale with the increasing demands. The result was the world’s first PhD dissertation about Bitcoin and the creation of a number of protocols, including PeerCensus and Duplex Micropayment Channels.
Since then, Christian has played a key role in Blockstream’s growth, gaining recognition from the community by making the first full, secure Lightning payment on a non-test network. The company wrote of his achievement that it was also “the first Lightning payment on Litecoin, sending a microscopic payment not normally possible or economic on a blockchain, fully settled in a fraction of a second.”
“[Christian made] ‘the first Lightning payment on Litecoin, sending a microscopic payment not normally possible or economic on a blockchain, fully settled in a fraction of a second.’”
BDJ sat him down at the ‘Off the Chain’ master workshop, where we had more than 25 leading developers focused on solving blockchain’s scalability issues, to discuss how the technology was evolving, his work on off-chain solutions, and the role of governments and industry incumbents in the technology’s development.
Going Off the Chain
The idea behind off-chain solutions is that less valuable transaction activity can be processed off-chain (in separate, private channels) and ultimately settled on-chain at a later time. The theory is that as user adoption of this solution expands, more transactions will move off-chain, thus freeing up space on the main chain.
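A minimal sketch of that idea, assuming a toy in-memory ledger and a two-party channel; real payment channels such as Lightning add signatures, timelocks and penalty transactions that are deliberately omitted here.

```python
# Minimal two-party payment channel sketch (no signatures, timelocks or disputes).
class PaymentChannel:
    def __init__(self, ledger, a, b, deposit_a, deposit_b):
        self.ledger = ledger
        # One on-chain transaction locks both deposits into the channel.
        ledger[a] -= deposit_a
        ledger[b] -= deposit_b
        self.balances = {a: deposit_a, b: deposit_b}

    def pay(self, sender, receiver, amount):
        """Off-chain balance update: instant, and no on-chain transaction at all."""
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self):
        """A second on-chain transaction settles the final balances."""
        for party, balance in self.balances.items():
            self.ledger[party] += balance

ledger = {"alice": 2000, "bob": 1000}
channel = PaymentChannel(ledger, "alice", "bob", 1000, 500)
for _ in range(500):                      # 500 micropayments, zero extra on-chain load
    channel.pay("alice", "bob", 1)
channel.close()
print(ledger)                             # {'alice': 1500, 'bob': 1500} after only two on-chain txs
```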
Christian argues that, “Off-chain solutions are definitely a part of the solution to the scalability problems we are having in blockchains. You see, blockchains are a really nice tool, but we need to add additional layers and additional solutions on top of it to actually reach the full potential of what was promised to us about 10 years ago now.”
And it is not just the scalability problems that they solve, argues Christian. “Off-chain solutions are actually a solution to not only the scalability, but also a number of other trade-offs as well – like the immediacy of payments and the fast iteration of protocol enhancements.
“Off-chain solutions are actually a solution to not only the scalability, but also a number of other trade-offs as well – like the immediacy of payments and the fast iteration of protocol enhancements.”
“I think off-chain solutions are both a way to extend out use cases as well as addressing part of the scalability solution. It’s probably not the only solution that we’re going to use, but it’s definitely a big part of it.”
The Lightning Network (LN) is probably the most famous scaling solution to achieve this, and it is an area Christian is familiar with, as he and Blockstream have worked closely alongside the LN team.
Despite only being in its beta testing stage, the Lightning Network experienced growth of 85% in July, reaching a current network capacity of 97.18 Bitcoin ($612,234.32), up from just 18 BTC – 24 BTC for most of June. Is this evidence that off-chain solutions are gaining traction?
Guaranteeing Security With Off-Chain Solutions
While there are advantages to off-chain solutions, they also add additional layers of complexity – something Christian acknowledges.
“We maintain the issue of needing to secure our private keys. We inherit the need to secure our data, and to take care of our own privacy,” he says. “We reduce some of these issues, but on the other hand we increase others.
“For example, the issue of needing to be watching the blockchain for any nefarious activities and be able to react in time, those are new issues that surface from off-chain protocols and that we add on top of the existing issues that we have with blockchains.
“On the other hand, we also have a number of solutions for the existing problems that we have on blockchains – like privacy, like the immediacy problem, like the ‘is this payment now confirmed or isn’t it?’ – because payments on off-chain channels are final right at the time when we finalise them and we don’t need to wait for confirmations.
“So, I think, on some parts we are adding complexity, but on the other hand we are also addressing quite a few of the issues that we have with just on-chain protocols.”
The Dangers of Hype
For blockchain to reach its full potential, it is not just scalability issues that need to be dealt with. “Blockchain currently has a number of issues that we need to address sooner rather than later,” says Christian. “And those are both on the technical side as well as the educational side.
“There is a lot of hype in the ecosystem, which may endanger quite a few of the users (­hinting at ICOs of course). That’s one of the educational issues that we need to address sooner rather than later.”
ICOs have become a problem for the blockchain community. The lack of oversight and the proliferation of scams have caused many to become skeptical of ICOs, and such a reputation is tarnishing the entire ecosystem. The need to hype up the solutions being touted in order to drum up investment also brings unrealistic expectations, which will eventually lead to disillusionment.
This hype also leads people to apply the technology where it is not really needed, which, when it doesn’t work, again drives disillusionment.
“On the technical side, I think there’s a lot of applications that are being pitched that are not really that applicable to blockchains”
“On the technical side, I think there’s a lot of applications that are being pitched that are not really that applicable to blockchains,” notes Christian. “So I think that while blockchains are a really useful tool, I think we still need to hone in on a set of use cases that are really sensible to build on a blockchain. And then to cut out everything else that is better suited for other systems.”
The Role of Governments
Another issue is the role of governments, who have traditionally been slow when it comes to their response to emerging technologies. With a decentralised technology that many believe has the potential to put power back in the hands of the people, what is the role of governments?
Christian believes that the government has a major role to play in all of this.
“On the one side,” he says, “it’s encouraging to see that various sandbox systems are being created, where this innovation and this evolution can take place. And on the other side, I think there should also be a focus on consumer protection, especially when it comes to these overhyped and just scammy projects that are being pitched.
“We should definitely allow for innovation to happen, but we shouldn’t expose people to the risks that are involved when trading cryptocurrencies or ICOs or tokens. Governments, so far, have done a really good job at it – maybe being a bit on the more lenient side, but I definitely welcome some open discussions with regulators.
“I’m actually also advising the Swiss government on regulation, so hopefully we can steer it in a direction that is both beneficial in terms of innovation, but also beneficial in terms of consumer protection.”
Is Crypto a Threat to Incumbent Financial Institutions?
Many believe that financial institutions, like governments, should fear blockchain’s transformative potential. Christian acknowledges that “there is this general vision of cryptocurrencies as the enemy of banks, and that banks either need to adapt or they will die.”
“Just because the underlying infrastructure changes, it doesn’t make the whole business model of banks obsolete”
However, he does not believe such a view is necessarily correct. “Just because the underlying infrastructure changes, it doesn’t make the whole business model of banks obsolete,” he argues. “Very few banks are actually in the business of transferring money or facilitating money transfers. It’s more that they build services on top of it. So, the big banks are all about putting investors and investees in contact and sort of negotiating the investment, rather than trying to get money from A to B.
“And whether we talk about cryptocurrencies or US dollars, I don’t see much of a difference. They will get rid of the pesky little problem of having to be a money transmitter in order to build these services on top.
“At the same time though, they might also get some new competition from this, because suddenly you have a much easier system to build these services on top of, and you lose a bit of the exclusivity of being a bank. So, I think us, as consumers, will definitely gain from it. Banks will be able to monetise on this as well, but they might need to innovate in order to do so.”
The Future
So, given the issues with scalability and education, how long before the general public actively engages with blockchain? “That’s a hard question,” says Christian. “It will definitely take a long, long time before we see grandmothers and granddads actively using cryptocurrencies or crypto systems in general.
“I think it’s going to be a gradual process. We see a lot of activity, currently, both with Lightning and Bitcoin, from the community. But it has not yet reached the level where it is self-sustaining.
“We’re still sort of in the ‘chicken and egg’ phase where, for us to become useful, my favourite kiosk needs to accept it, and for them to become useful I need to have cryptocurrency in my pocket so that they can actually pay for the costly infrastructure to run it.
“What we can definitely do is reduce this onboarding cost and make it easier for users to actually use the technology. I think we’ll get there eventually and actually make it easy for everybody to use”
“What we can definitely do is reduce this onboarding cost and make it easier for users to actually use the technology. I think we’ll get there eventually and actually make it easy for everybody to use.
“And that’s the first step. Whether that then succeeds or not... that’s basically markets at play and I’m really bad at predicting markets.”
submitted by jvndn101 to binarydistrict [link] [comments]

Agreement with Satoshi – On the Formalization of Nakamoto Consensus

Cryptology ePrint Archive: Report 2018/400
Date: 2018-05-01
Author(s): Nicholas Stifter, Aljosha Judmayer, Philipp Schindler, Alexei Zamyatin, Edgar Weippl

Link to Paper


Abstract
The term Nakamoto consensus is generally used to refer to Bitcoin's novel consensus mechanism, by which agreement on its underlying transaction ledger is reached. It is argued that this agreement protocol represents the core innovation behind Bitcoin, because it promises to facilitate the decentralization of trusted third parties. Specifically, Nakamoto consensus seeks to enable mutually distrusting entities with weak pseudonymous identities to reach eventual agreement while the set of participants may change over time. When the Bitcoin white paper was published in late 2008, it lacked a formal analysis of the protocol and the guarantees it claimed to provide. It would take the scientific community several years before first steps towards such a formalization of the Bitcoin protocol and Nakamoto consensus were presented. However, since then the number of works addressing this topic has grown substantially, providing many new and valuable insights. Herein, we present a coherent picture of advancements towards the formalization of Nakamoto consensus, as well as a contextualization in respect to previous research on the agreement problem and fault tolerant distributed computing. Thereby, we outline how Bitcoin's consensus mechanism sets itself apart from previous approaches and where it can provide new impulses and directions to the scientific community. Understanding the core properties and characteristics of Nakamoto consensus is of key importance, not only for assessing the security and reliability of various blockchain systems that are based on the fundamentals of this scheme, but also for designing future systems that aim to fulfill comparable goals.
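
For intuition, the core rule the paper formalizes is often summarised as "follow the chain with the most accumulated proof-of-work" (loosely, the longest chain). The sketch below is a simplification of that fork-choice intuition, not the paper's formal model.

```python
# Simplified Nakamoto fork-choice intuition: adopt the tip whose chain carries the most work.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    parent: Optional["Block"]
    work: float                 # proof-of-work difficulty contributed by this block

def total_work(tip: Block) -> float:
    """Accumulated work from genesis up to (and including) this tip."""
    work, block = 0.0, tip
    while block is not None:
        work += block.work
        block = block.parent
    return work

def choose_tip(tips: list[Block]) -> Block:
    """Heaviest-chain rule; reorganisations remain possible, so agreement is only eventual."""
    return max(tips, key=total_work)

genesis = Block(None, 1.0)
tip_a = Block(Block(genesis, 1.0), 1.0)     # two blocks built on genesis
tip_b = Block(genesis, 1.5)                 # one harder block on a competing fork
print(choose_tip([tip_a, tip_b]) is tip_a)  # True: 3.0 total work beats 2.5
```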

[MJ14] A Miller and LaViola JJ. Anonymous byzantine consensus from moderately-hard puzzles: A model for bitcoin. https://socrates1024.s3.amazonaws.com/consensus.pdf, 2014. Accessed: 2016-03-09.
[MMRT03] Dahlia Malkhi, Michael Merritt, Michael K Reiter, and Gadi Taubenfeld. Objects shared by byzantine processes. volume 16, pages 37–48. Springer, 2003.
[MPR01] Hugo Miranda, Alexandre Pinto, and Luıs Rodrigues. Appia, a flexible protocol kernel supporting multiple coordinated channels. In Distributed Computing Systems, 2001. 21st International Conference on., pages 707–710. IEEE, 2001.
[MR97] Dahlia Malkhi and Michael Reiter. Unreliable intrusion detection in distributed computations. In Computer Security Foundations Workshop, 1997. Proceedings., 10th, pages 116–124. IEEE, 1997.
[MRT00] Achour Mostefaoui, Michel Raynal, and Fred´ eric Tronel. From ´ binary consensus to multivalued consensus in asynchronous message-passing systems. Information Processing Letters, 73(5-6):207–212, 2000.
[MXC+16] Andrew Miller, Yu Xia, Kyle Croman, Elaine Shi, and Dawn Song. The honey badger of bft protocols. https://eprint.iacr.org/2016/199.pdf, 2016. Accessed: 2017-01-10.
[Nak08a] Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, Dec 2008. Accessed: 2015-07-01.
[Nak08b] Satoshi Nakamoto. Bitcoin p2p e-cash paper, 2008.
[Nar16] Narayanan, Arvind and Bonneau, Joseph and Felten, Edward and Miller, Andrew and Goldfeder, Steven. Bitcoin and cryptocurrency technologies. https://d28rh4a8wq0iu5.cloudfront.net/bitcointech/readings/princeton bitcoin book.pdf?a=1, 2016. Accessed: 2016-03-29.
[Nei94] Gil Neiger. Distributed consensus revisited. Information processing letters, 49(4):195–201, 1994.
[NG16] Christopher Natoli and Vincent Gramoli. The blockchain anomaly. In Network Computing and Applications (NCA), 2016 IEEE 15th International Symposium on, pages 310–317. IEEE, 2016.
[NKMS16] Kartik Nayak, Srijan Kumar, Andrew Miller, and Elaine Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In 1st IEEE European Symposium on Security and Privacy, 2016. IEEE, 2016.
[PS16a] Rafael Pass and Elaine Shi. Fruitchains: A fair blockchain. http://eprint.iacr.org/2016/916.pdf, 2016. Accessed: 2016-11-08.
[PS16b] Rafael Pass and Elaine Shi. Hybrid consensus: Scalable permissionless consensus. https://eprint.iacr.org/2016/917.pdf, Sep 2016. Accessed: 2016-10-17.
[PS17] Rafael Pass and Elaine Shi. Thunderella: Blockchains with optimistic instant confirmation. Cryptology ePrint Archive, Report 2017/913, 2017. Accessed:2017-09-26.
[PSL80] Marshall Pease, Robert Shostak, and Leslie Lamport. Reaching agreement in the presence of faults. volume 27, pages 228–234. ACM, 1980.
[PSs16] Rafael Pass, Lior Seeman, and abhi shelat. Analysis of the blockchain protocol in asynchronous networks. http://eprint.iacr.org/2016/454.pdf, 2016. Accessed: 2016-08-01.
[Rab83] Michael O Rabin. Randomized byzantine generals. In Foundations of Computer Science, 1983., 24th Annual Symposium on, pages 403–409. IEEE, 1983.
[Rei96] Michael K Reiter. A secure group membership protocol. volume 22, page 31, 1996.
[Ric93] Aleta M Ricciardi. The group membership problem in asynchronous systems. PhD thesis, Cornell University, 1993.
[Ros14] M. Rosenfeld. Analysis of hashrate-based double spending. http://arxiv.org/abs/1402.2009, 2014. Accessed: 2016-03-09.
[RSW96] Ronald L Rivest, Adi Shamir, and David A Wagner. Time-lock puzzles and timed-release crypto. 1996.
[Sch90] Fred B Schneider. Implementing fault-tolerant services using the state machine approach: A tutorial. volume 22, pages 299–319. ACM, 1990.
[SLZ16] Yonatan Sompolinsky, Yoad Lewenberg, and Aviv Zohar. Spectre: A fast and scalable cryptocurrency protocol. Cryptology ePrint Archive, Report 2016/1159, 2016. Accessed: 2017-02-20.
[SSZ15] Ayelet Sapirshtein, Yonatan Sompolinsky, and Aviv Zohar. Optimal selfish mining strategies in bitcoin. http://arxiv.org/pdf/1507.06183.pdf, 2015. Accessed: 2016-08-22.
[SW16] David Stolz and Roger Wattenhofer. Byzantine agreement with median validity. In LIPIcs-Leibniz International Proceedings in Informatics, volume 46. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2016.
[Swa15] Tim Swanson. Consensus-as-a-service: a brief report on the emergence of permissioned, distributed ledger systems. http://www.ofnumbers.com/wp-content/uploads/2015/04/Permissioned-distributed-ledgers.pdf, Apr 2015. Accessed: 2017-10-03.
[SZ13] Yonatan Sompolinsky and Aviv Zohar. Accelerating bitcoin’s transaction processing. fast money grows on trees, not chains, 2013.
[SZ16] Yonatan Sompolinsky and Aviv Zohar. Bitcoin’s security model revisited. http://arxiv.org/pdf/1605.09193, 2016. Accessed: 2016-07-04.
[Sza14] Nick Szabo. The dawn of trustworthy computing. http://unenumerated.blogspot.co.at/2014/12/the-dawn-of-trustworthy-computing.html, 2014. Accessed: 2017-12-01.
[TS16] Florian Tschorsch and Bjorn Scheuermann. Bitcoin and ¨ beyond: A technical survey on decentralized digital currencies. In IEEE Communications Surveys Tutorials, volume PP, pages 1–1, 2016.
[VCB+13] Giuliana Santos Veronese, Miguel Correia, Alysson Neves Bessani, Lau Cheuk Lung, and Paulo Verissimo. Efficient byzantine fault-tolerance. volume 62, pages 16–30. IEEE, 2013.
[Ver03] Paulo Ver´ıssimo. Uncertainty and predictability: Can they be reconciled? In Future Directions in Distributed Computing, pages 108–113. Springer, 2003.
[Vuk15] Marko Vukolic. The quest for scalable blockchain fabric: ´ Proof-of-work vs. bft replication. In International Workshop on Open Problems in Network Security, pages 112–125. Springer, 2015.
[Vuk16] Marko Vukolic. Eventually returning to strong consistency. https://pdfs.semanticscholar.org/a6a1/b70305b27c556aac779fb65429db9c2e1ef2.pdf, 2016. Accessed: 2016-08-10.
[XWS+17] Xiwei Xu, Ingo Weber, Mark Staples, Liming Zhu, Jan Bosch, Len Bass, Cesare Pautasso, and Paul Rimba. A taxonomy of blockchain-based systems for architecture design. In Software Architecture (ICSA), 2017 IEEE International Conference on , pages 243–252. IEEE, 2017.
[YHKC+16] Jesse Yli-Huumo, Deokyoon Ko, Sujin Choi, Sooyong Park, and Kari Smolander. Where is current research on blockchain technology? – a systematic review. volume 11, page e0163477. Public Library of Science, 2016.
[ZP17] Ren Zhang and Bart Preneel. On the necessity of a prescribed block validity consensus: Analyzing bitcoin unlimited mining protocol. http://eprint.iacr.org/2017/686, 2017. Accessed: 2017-07-20.
submitted by dj-gutz to myrXiv [link] [comments]

RightMesh AMA Answers

Thank you for your interest in our project and for submitting questions over the past week for our first AMA!
 
Please see below for our answers. Question thread available here. If you would like further clarification on any of the below, please join our Telegram channel to speak directly with the team.
 
The RightMesh Team
 
 
 

I like you guys the most because you're a BCORP with a great purpose, but what does your organization do better than the competition? Thank you.

 
Thank you for your kind words about our B Corp status, it’s something we pride ourselves on at Left and RightMesh! For those who are not familiar, Left, the parent company of RightMesh, is a certified B-Corp and has won numerous awards for community engagement and corporate culture. B Corps are for-profit companies certified by the non-profit B Lab to meet rigorous standards of social and environmental performance, accountability, and transparency. As a certified B Corp, Left is committed to doing business “right” – for the good of all. There are over 2,400 B Corps in over 50 countries, covering 130 industries. Some notable B Corps include Ben & Jerry’s, Warby Parker, Patagonia, Etsy, Plum Organics, and of course, Left!
 
We believe there are several differentiating factors about RightMesh, spanning from our organization to our technology. These include:
 
 

Culture & Values:

 
Left’s founders, Chris Jensen and John Lyotier, had a dream to create a company built on core values and an anything-is-possible attitude that can make this planet a better place. We have been recognized as the “Best Employer in BC (British Columbia, Canada)” by Small Business BC, and we are a two-time winner of the BC Tech Community Engagement Award. All employees get to participate in our “Dream Program” in which the company supports us to fulfil our personal dreams and ambitions, and we are given unlimited work hours for volunteering in our community.
 

Team Expertise:

 
The RightMesh team consists of over 100 PhDs, Scientists, Developers, Entrepreneurs, Business Strategists and other experts who have in-depth expertise in Mesh technologies, blockchain and building successful businesses.
 
RightMesh has offices in Vancouver, Canada and Khulna, Bangladesh. We also have project contributors and partners working from Zug, Switzerland and Los Angeles, United States.
 
A key differentiating factor is that our team has strong experience in scaling teams, which will be extremely important to the success of RightMesh following our TGE.
 

Executive Team Overview

 
John Lyotier, Co-founder and CEO  
Co-Founder & CEO, RightMesh. John is one of the co-founders and is a key contributor to the global strategy, vision, and technology roadmap for RightMesh, its parent company Left, and all its subsidiary brands. John is an entrepreneur and a successful marketer with more than 20 years of experience in promoting, launching, designing, and jumpstarting new businesses and products through innovative marketing concepts. Under his leadership, the parent company, Left, has gained a national reputation as being a “Best Workplace” award winner while being the first back-to-back recipient of the BC Tech Association’s Tech Impact Award for Community Engagement, recognizing the best company in BC for balancing “Work, Life, and Play”. With RightMesh, he is focused on bringing connectivity to the next billion.
 
Chris Jensen, Co-founder and COO  
Chris began his career in the UK working for multinationals and banks and continued in the banking and brokerage industry upon moving to Canada. He has a strong understanding of the finance markets and has lived the pain of raising capital for early stage companies during the beginning stages of growth, from 25 to 80+ employees. He has founded several start-up companies in his career. In his role as CEO for Left and COO at RightMesh, Chris thrives on understanding the big picture and on moving the levers that drive the company forward. This includes financing, strategic partnerships, and corporate development. Chris holds a BSc (Honours) in Economics and History from Queen Mary University of London.
 
Dr. Jason Ernst, CTO and Chief Networking Scientist  
Jason holds a PhD in the field of Mesh Networking and Heterogeneous Wireless Networks as well as a M.Sc. on Scheduling Techniques for Wireless Mesh Networks, both from the Applied Computing faculty at the University of Guelph. An adjunct professor at the University of Guelph, Jason has more than 30 published papers on wireless networks, cognitive agents, FPGAs, and soft-computing topics and has presented his research at international conferences around the world. Jason is the only Canadian member of the ACM Future of Computing Academy and a member of their executive committee. Prior to joining Left, Jason was the CTO of Redtree Robotics, which designed robots that made use of multiple radio technologies to ensure pervasive connectivity to each other and their operators.
 
Dr. David Wang, Applied Research Engineering Scientist  
Dr. Zehua Wang is the Chief Micropayment Scientist at RightMesh. He received his Ph.D. from the University of British Columbia (UBC), Vancouver, Canada, and his master's and bachelor's degrees in Computer Engineering and Software Engineering, respectively. He holds a research fellow position at UBC. He has published more than 30 peer-reviewed book chapters and papers on mobile ad-hoc networks, blockchain technology, the Internet of Things, and fifth-generation wireless networks, and has expertise in using optimization and game theory to solve economic problems. He was a recipient of the Four-Year Fellowship and the Graduate Support Initiative Award at UBC. In industry, he has about 10 years of software development experience. In academia, he has served as technical program committee (TPC) Co-chair of the IEEE International Workshop in Smart Multimedia and as a TPC member for many international conferences, including IEEE ICC, IEEE Globecom, and IEEE VTC. He is a member of IEEE.
 
Saju Abraham, Chief Product Officer  
Saju is a seasoned professional in the realm of mobile and wireless technologies having worked with customers, partners and teams across 19 countries in organizations such as Lucent Technologies, Movius, NEC, OnMobile and Telefónica. His passion for building great products stemmed from his multifaceted experience as a software engineer, architect and product manager, and he currently thrives in bringing multiple cross-functional and cross-cultural teams together to cohesively execute the product strategy for RightMesh. His credentials include a Bachelor’s degree in Computer Science and Engineering and a Postgraduate degree in Management from the Indian Institute of Management, Bangalore.
 
Melissa Quinn, Corporate Development Manager  
Melissa’s passion to empower people to be their best selves is why she has immersed herself in the blockchain, cryptocurrency, and mesh technology world. Heading up Corporate Development for RightMesh, Melissa works closely with the team while constantly seeking Partners, Advisors, and other game changers who are aligned with our vision. She has a BBA from SFU, a background in HR, and a strong desire to put innovative technology at the forefront of doing business as a force for good.
 
Rakib Islam, Co-Founder and CTO of Left  
In his role as CTO, Rakib sets the pace for Left’s application development initiatives, including key recruitment of engineering and mobile technologists. Rakib leads Left Technologies Pty Ltd, Left’s ISO-9000 certified subsidiary in Bangladesh. An active member of BASIS (Bangladesh Association for Software and Information Services), he frequently travels abroad to present an example of the ‘new’ Bangladesh and speak about economic empowerment. Rakib’s credentials include a Master’s Degree in Computer Science and Applications from Pune University, India, as well as being a participant in the US Department of State Professional Fellows Program for Young Entrepreneurs at the University of Oklahoma.
 
Tracy McDonald, Director, Talent & Culture  
With over 10 years working with people to grow their potential, Tracy is passionate about creating dynamic teams that facilitate business growth and a positive culture. As an early Lefty, she was instrumental in scaling up the team to over 80 people without losing the culture that makes Left special and unique. Tracy’s coaching and development work with the Lefties has been recognized with many awards, including “Best Workplace in BC” and the Community Engagement Award from the BC Tech Association. Her dedication to making Left a premier workplace was further recognized when Left became a certified B Corporation. Tracy’s belief in the potential of people allows her to lead with compassion, integrity, and trust. She earned her Bachelor of Science from Simon Fraser University.
 
Dana Harvey, Chief Communications Officer  
Dana harnesses the power of words and technology to engage audiences and compel them to action. As a communications professional with 25+ years’ experience in global markets, Dana combines strong strategic skills with out-of-the-box thinking and the unique ability to craft omnichannel content that resonates and inspires. She has helped large corporations like Nortel, Motorola and IBM develop new markets, managed an international advertising agency, and guided multiple businesses to success through her own communications consultancy. Dana is also an experienced public speaker, passionate about sharing her knowledge and motivating audiences. As an advocate for the full participation of women in all communities, she is especially interested in exploring the positive social and economic impacts RightMesh will bring to women in developing nations and around the world. Dana is co-founder of the Women’s Collaborative Hub, an organization that empowers youth and women from diverse backgrounds. Her credentials include a BA (Honours) in Communications and a Post Baccalaureate Masters (Dean’s List) in Asian Management.
 
Alyse Killeen, Executive Strategist  
Alyse is Managing Partner of StillMark Co. and StillMark Capital, and is one of the very first traditional venture investors to participate as an investor and advisor in the blockchain and cryptocurrency ecosystems. In 2015, the UN Foundation named her a Top 70 Bay Area Digital Leader, and in 2016, Singapore University of Social Sciences (SUSS), a university under the ambit of Singapore’s national Ministry of Education, appointed Alyse as a Fintech Fellow. In 2017, International Business Times (IBT) recognized Alyse’s contribution to the development of the blockchain ecosystem by including her in the 4th position of IBT’s “VCs Powering the Blockchain Boom” List, following Tim Draper, Mark Cuban, and Naval Ravikant of AngelList and MetaStable. Alyse has presented internationally, been featured in many reputable publications, authored a book chapter in the award-winning Handbook of Digital Currency titled “The Confluence of Bitcoin and the Global Sharing Economy”, and in 2017 contributed to the next book in the series, Handbook of Blockchain, Digital Finance, and Inclusion (2017), co-authoring “Global Financial Institutions 2.0” with Dr. R. Chan of the World Bank. In her role as Executive Strategist, Alyse consults with the executive team, including on the development of the team’s network within the blockchain community and introduction to ecosystem leaders.
 

Our Advisors:

 
Our advisory team consists of advisors who believe in the long-lasting success of the project. They have been carefully selected to help build RightMesh over multiple years of operation and are not involved solely for the token generation event.
 
Our advisors include:
 
 

Academic Research:

 
Academic research has been core to the design and development of RightMesh thus far, and will continue to be a key driver for us in the future. RightMesh works closely with Universities on academic research on mesh networks, blockchain technology, and payment channels. We are working on research with the University of British Columbia on density simulation and payment channel development. Since early 2017, we’ve been conducting research on mesh networks and connectivity in Arctic / remote regions with:
 
 
We've received grants from NSERC, MITACS and CIRA to support pilot programs thus far and are submitting a MITACS cluster grant to support over 100 graduate student units over the next 3-5 years. This research covers everything from how to design relevant mesh apps in the communities the mesh is operating in, to performance evaluation of the network protocols, to scalability of micropayment channels.
 

Technology:

 
How the mesh is designed also matters for scalability. Most mesh networking solutions are built around a store-and-forward, broadcast mechanism. That approach does not scale: it congests the network and can cause it to break down completely. Even a small number of devices can quickly generate exponential traffic, resulting in extremely high delay and low effective throughput for apps running over broadcast protocols. In the RightMesh network, devices communicate directly with one another and make smart routing decisions along the way.
 
RightMesh implements an autonomous role and topology/mesh creation layer, which means devices in the RightMesh network autonomously detect each other and connect; user intervention in forming the network is minimized.
 

Other key tech differentiators include:

 
- We don't broadcast data; we compute a route between devices.
- Our protocol was built to use multiple paths (most meshes use a single path and have long recovery times on a broken connection). The RightMesh network protocols can fail over, or use multiple paths at the same time.
- RightMesh doesn't require the phone to be rooted.
- RightMesh doesn't require extra hardware.
- RightMesh can share existing Wi-Fi or cellular data; many others can only share cellular data.
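 
As a rough illustration of the "compute a route, keep a backup path" idea above, here is a minimal, self-contained Java sketch. It is not RightMesh code and none of the names come from the RightMesh library; it simply computes a primary route plus a node-disjoint backup route on a toy mesh graph, which is what makes failover and multi-path delivery possible without broadcasting.

```java
import java.util.*;

// Illustrative only: a toy multi-path route computation on a mesh graph.
// This is not the RightMesh routing protocol; it just shows the idea of a
// primary route plus a node-disjoint backup path so traffic can fail over
// (or be split) instead of being broadcast to everyone.
public class MultiPathSketch {

    // Breadth-first search for a shortest hop-count path, skipping banned nodes.
    static List<String> shortestPath(Map<String, Set<String>> graph,
                                     String src, String dst, Set<String> banned) {
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>(List.of(src));
        Set<String> seen = new HashSet<>(banned);
        seen.add(src);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(dst)) {
                List<String> path = new ArrayList<>();
                for (String cur = dst; cur != null; cur = parent.get(cur)) path.add(cur);
                Collections.reverse(path);
                return path;
            }
            for (String next : graph.getOrDefault(node, Set.of())) {
                if (seen.add(next)) { parent.put(next, node); queue.add(next); }
            }
        }
        return List.of(); // no route found
    }

    public static void main(String[] args) {
        // A small mesh: A can reach D either via B or via C.
        Map<String, Set<String>> mesh = Map.of(
                "A", Set.of("B", "C"),
                "B", Set.of("A", "D"),
                "C", Set.of("A", "D"),
                "D", Set.of("B", "C"));

        List<String> primary = shortestPath(mesh, "A", "D", Set.of());
        // Ban the primary path's intermediate hops to find a node-disjoint backup.
        Set<String> banned = new HashSet<>(primary.subList(1, primary.size() - 1));
        List<String> backup = shortestPath(mesh, "A", "D", banned);

        System.out.println("primary: " + primary + "  backup: " + backup);
    }
}
```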
 
 

Partners & Affiliations:

 
 
Answer provided by the RightMesh Team
 
 

Hello, First, congratulations on the big idea! I'm definitely a supporter. (1/2) My question is how far are you into testing your mesh network?

 
Thank you! We’ve spent the last 1.5 years or so building the protocol stack from the ground up, so most of the testing so far has focused on the functionality of the stack - including node discovery, single-hop and multi-hop communication, multi-path routing, forming mesh networks with heterogeneous wireless links, and app integration.
 
Over time, we have been steadily improving our end-to-end reliable communications protocol. When we first started it achieved on the order of a few kbps, because we did end-to-end acks on every packet. We have since moved to a sliding-window and selective-ack mechanism, which has allowed performance to climb closer to the Mbps range. However, we still have more work to do to reach the theoretical maximums of the individual links (and to go even faster by combining links).
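 
For readers unfamiliar with the mechanism, the sketch below shows the general shape of a sliding-window sender with selective acknowledgements: instead of waiting for an end-to-end ack on every packet, the sender keeps a window of in-flight packets and retransmits only the gaps the receiver reports. This is a generic illustration, not the RightMesh wire protocol; all names and sizes are assumptions.

```java
import java.util.*;

// Illustrative only: a sliding-window sender with selective acknowledgements,
// the kind of mechanism credited above with moving throughput from the
// ack-per-packet kbps range toward Mbps. Not RightMesh protocol code.
public class SlidingWindowSketch {
    private final int windowSize;
    private final TreeSet<Integer> unacked = new TreeSet<>();
    private int nextSeq = 0;

    SlidingWindowSketch(int windowSize) { this.windowSize = windowSize; }

    /** Send as many new packets as the window allows; returns the sequence numbers sent. */
    List<Integer> fillWindow() {
        List<Integer> sent = new ArrayList<>();
        while (unacked.size() < windowSize) {
            unacked.add(nextSeq);
            sent.add(nextSeq++);
        }
        return sent;
    }

    /**
     * Process a selective ack: everything below cumulativeAck is confirmed, plus the
     * explicitly listed out-of-order packets. Returns only the gaps that still need
     * retransmission, instead of resending the whole window.
     */
    List<Integer> onSelectiveAck(int cumulativeAck, Set<Integer> sackedSeqs) {
        unacked.removeIf(seq -> seq < cumulativeAck || sackedSeqs.contains(seq));
        return new ArrayList<>(unacked);
    }

    public static void main(String[] args) {
        SlidingWindowSketch sender = new SlidingWindowSketch(8);
        System.out.println("sent: " + sender.fillWindow());          // 0..7
        // Receiver got 0-2 plus 4 and 6, but lost 3, 5 and 7.
        List<Integer> resend = sender.onSelectiveAck(3, Set.of(4, 6));
        System.out.println("retransmit only: " + resend);            // [3, 5, 7]
    }
}
```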
 
In terms of testing the scale of a RightMesh network, we've tested with up to 10 hops on a single path, but can likely support more. Right now the largest offline mesh we've had is 30 devices, limited only by the number of devices we had available at the time.
 
Building a performance evaluation framework is one of our next immediate and important tasks. It will let us evaluate the network under various test conditions - for example, how the network behaves at different densities, and how the number of hops affects response time and the data that flows through the network.
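 
To make the "hops versus response time" style of experiment concrete, here is a toy simulation of the kind such a framework might run. The per-link latency figures are invented for the example; only the methodology (sweep hop counts, average over many trials) is the point.

```java
import java.util.Random;

// Illustrative only: estimating how round-trip response time grows with hop
// count when each hop adds some per-link latency. The 5-25 ms per-link range
// is a made-up assumption, not a RightMesh measurement.
public class HopDelaySimulationSketch {

    /** Simulate one request/response over `hops` links and return the RTT in ms. */
    static double simulateRttMs(int hops, Random rng) {
        double rtt = 0;
        for (int i = 0; i < hops * 2; i++) {          // out and back
            rtt += 5 + rng.nextDouble() * 20;         // 5-25 ms per link (hypothetical)
        }
        return rtt;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        int trials = 1_000;
        for (int hops = 1; hops <= 10; hops++) {
            double total = 0;
            for (int t = 0; t < trials; t++) total += simulateRttMs(hops, rng);
            System.out.printf("hops=%2d  avg RTT ~ %.1f ms%n", hops, total / trials);
        }
    }
}
```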
 

(2/2) Can I assume I'll only be able to participate if I'm in the surrounding locations? For example: Someone in Indonesia is using RightMesh to try and connect to the internet. Is there a possibility for me to help them if I live in a different country? Thank you and keep up the good work.

 
To participate in a RightMesh network, you will have to be in close vicinity to another RightMesh-powered node (smartphone) in order to be connected to a network. However, it will be possible for community members to operate devices that provide a “superpeer” layer. These would be fixed nodes with stable, reliable, and ideally fast internet connections. They would provide relaying between geographically separate meshes - for instance, between two neighbourhoods that are too far apart for one mesh to cover them both. They would be required to provide the tokens that facilitate the channels between buyers and sellers, which would allow them to charge a fee for having their tokens locked up in those channels.
 
We will also open source the superpeer, so people will be able to work off our reference superpeer implementation and build their own custom superpeers. This would let them control the strategy the superpeer uses to allocate tokens into channels. We expect to have a release of the superpeer that supports payment channels by next week. At this point the solution is at the proof-of-concept stage, but some testing has been done to support two meshes communicating with each other through a superpeer, where the data seller in each mesh is compensated by the buyers in each mesh.
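 
Purely as a sketch of the economics described above (a superpeer locking its own tokens into channels and charging a fee for doing so), here is a hypothetical allocator in Java. It is not based on the actual superpeer reference implementation; the class names, fee model, and numbers are all assumptions.

```java
import java.util.*;

// Illustrative only: how a community-run superpeer might lock tokens into
// payment channels and charge a fee for doing so. Not the RightMesh superpeer
// implementation; every name and value here is hypothetical.
public class SuperpeerAllocatorSketch {
    static final class Channel {
        final String buyer, seller;
        final long lockedTokens;
        Channel(String buyer, String seller, long lockedTokens) {
            this.buyer = buyer; this.seller = seller; this.lockedTokens = lockedTokens;
        }
    }

    private long freeTokens;
    private final double feeRate;             // e.g. 0.01 = 1% of relayed value (assumed)
    private final List<Channel> channels = new ArrayList<>();

    SuperpeerAllocatorSketch(long freeTokens, double feeRate) {
        this.freeTokens = freeTokens;
        this.feeRate = feeRate;
    }

    /** Open a channel only if enough uncommitted tokens remain to back it. */
    Optional<Channel> openChannel(String buyer, String seller, long deposit) {
        if (deposit > freeTokens) return Optional.empty();
        freeTokens -= deposit;
        Channel c = new Channel(buyer, seller, deposit);
        channels.add(c);
        return Optional.of(c);
    }

    /** Fee earned for relaying a payment of `amount` through a channel. */
    long relayFee(long amount) {
        return Math.round(amount * feeRate);
    }

    public static void main(String[] args) {
        SuperpeerAllocatorSketch sp = new SuperpeerAllocatorSketch(10_000, 0.01);
        sp.openChannel("buyer-mesh-A", "seller-mesh-B", 2_000)
          .ifPresent(c -> System.out.println(
              "locked " + c.lockedTokens + ", fee on a 500-token relay: " + sp.relayFee(500)));
    }
}
```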
 
Answers provided by Dr. Jason Ernst, CTO and Chief Networking Scientist & Saju Abraham, Chief Product Officer
 
 

What do you see as the biggest challenge with taking your technology to market and hitting your user growth targets?

 
Density is the biggest challenge of mesh technologies, and one of the reasons why token economies are required to incentivize users to share their signal when it is available.
 
We are looking to bring users into the RightMesh ecosystem through the work they do in the network, and to provide economic incentives that encourage further action. What counts as work? Acting as a relay node in the network, for instance - that lowers barriers to entry. Or rewarding users for taking actions in an app or consuming content such as ads. The more opportunities there are for users to earn, the more people will join, and the more developers will join the ecosystem, leading to more opportunities; the network-effects loop should grow stronger.
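 
A toy example of the earning model described above: crediting a MeshID for units of "work" such as relaying megabytes or viewing ads. The reward values are made up for illustration and do not reflect RightMesh's actual token economics.

```java
import java.util.*;

// Illustrative only: a toy earnings ledger for the kinds of "work" mentioned
// above. Reward values are invented; the real incentive design lives in the
// RightMesh token economics, not here.
public class IncentiveLedgerSketch {
    enum Work { RELAY_MB, AD_VIEW }

    private static final Map<Work, Long> REWARD = Map.of(
            Work.RELAY_MB, 5L,    // tokens per MB relayed (hypothetical)
            Work.AD_VIEW, 2L);    // tokens per ad viewed (hypothetical)

    private final Map<String, Long> balances = new HashMap<>();

    /** Credit a node's MeshID for completed work units. */
    void credit(String meshId, Work work, long units) {
        balances.merge(meshId, REWARD.get(work) * units, Long::sum);
    }

    public static void main(String[] args) {
        IncentiveLedgerSketch ledger = new IncentiveLedgerSketch();
        ledger.credit("node-42", Work.RELAY_MB, 120);  // relayed 120 MB
        ledger.credit("node-42", Work.AD_VIEW, 3);     // viewed 3 ads
        System.out.println("node-42 earned: " + ledger.balances.get("node-42"));
    }
}
```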
 
Answer provided by Saju Abraham, Chief Product Officer & Aldrin D’souza, Product Manager
 
 

(1/3) What is the theoretical maximum mesh size?

 
There isn't really a theoretical limit. We don't have any hard caps on devices in our code; however, locally there may be limitations on individual phones. For instance, I've seen some phones in hotspot mode that only support 6 connected clients, and other phones that manage as few as 3-4 Bluetooth connections. So there are some constraints on the topology and the maximum number of connections one device may have, but these come from the devices, the chipset and Android rather than from our software. We can also work around some of these limitations using our switching technology; however, this has a noticeable impact on delay.
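 
As an illustration of those device-side limits, a mesh layer could track per-radio neighbour caps along the lines of the hypothetical sketch below. The cap values are examples taken from the answer above, not measured constants, and the code is not from the RightMesh library.

```java
import java.util.*;

// Illustrative only: enforcing per-device neighbour caps when attaching a new
// node, reflecting that the limit comes from the phone (hotspot client limits,
// a handful of Bluetooth links) rather than from the mesh software itself.
public class ConnectionCapSketch {
    enum Radio { WIFI_HOTSPOT, BLUETOOTH }

    private static final Map<Radio, Integer> CAP = Map.of(
            Radio.WIFI_HOTSPOT, 6,   // some phones cap hotspot clients at ~6
            Radio.BLUETOOTH, 4);     // some phones manage only 3-4 BT links

    private final Map<Radio, Integer> inUse = new EnumMap<>(Radio.class);

    /** Try to attach a neighbour over the given radio; false if the cap is hit. */
    boolean attach(Radio radio) {
        int used = inUse.getOrDefault(radio, 0);
        if (used >= CAP.get(radio)) return false;
        inUse.put(radio, used + 1);
        return true;
    }

    public static void main(String[] args) {
        ConnectionCapSketch phone = new ConnectionCapSketch();
        int accepted = 0;
        for (int i = 0; i < 10; i++) if (phone.attach(Radio.BLUETOOTH)) accepted++;
        System.out.println("accepted BT neighbours: " + accepted); // 4
    }
}
```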
 

(2/3) Does the transfer rate for users slow as the mesh size increases?

 
This is less a function of the number of users or devices, and more a function of the demand on the network. A network with many devices and few users actually requesting traffic may perform better than a small network where all of the users are requesting lots of traffic. There is some overhead in the protocol to maintain the connectivity of devices, but its impact will be minimal compared to the traffic load from all of the devices. It also depends on where the traffic is going. If it is internal to the mesh, a dense mesh could allow RightMesh to support high throughput internally. The bottlenecks would likely occur where there is lots of traffic that requires the Internet and too few people willing to sell or donate Internet data into the network. Compared to other meshes, however, because RightMesh can support multiple paths we can split the load across all available Internet connections rather than doing something more naive like relying only on the closest one.
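 
A minimal sketch of the load-splitting idea mentioned above: demand is divided across every available Internet gateway in proportion to its capacity, instead of all flowing to the nearest seller. The gateway names and capacities are hypothetical.

```java
import java.util.*;

// Illustrative only: splitting outbound Internet demand across all sellers in
// the mesh in proportion to advertised capacity, rather than sending every
// request to the closest gateway. Not RightMesh code; numbers are invented.
public class GatewayLoadSplitSketch {

    /** Split `demandKbps` across gateways proportionally to their capacity. */
    static Map<String, Long> split(Map<String, Long> gatewayCapacityKbps, long demandKbps) {
        long total = gatewayCapacityKbps.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Long> share = new LinkedHashMap<>();
        for (Map.Entry<String, Long> gw : gatewayCapacityKbps.entrySet()) {
            share.put(gw.getKey(), demandKbps * gw.getValue() / total);
        }
        return share;
    }

    public static void main(String[] args) {
        Map<String, Long> gateways = new LinkedHashMap<>();
        gateways.put("seller-1 (LTE)", 8_000L);
        gateways.put("seller-2 (WiFi)", 2_000L);
        // 5 Mbps of demand gets split 4:1 instead of saturating a single seller.
        System.out.println(split(gateways, 5_000));
    }
}
```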
 

(3/3) How do you plan to test a large scale mesh prior to launch?

 
There is a lot we can do with simulation, or by combining simulations with some real devices. We also have a large team in Bangladesh that can help support field tests in environments very different from what we are used to in Vancouver.
 
Further, we are working with researchers at UBC and Guelph so that graduate students can apply some of the latest research methods in simulation and performance evaluation to RightMesh. (My own PhD relied heavily on this area, and we have several other PhDs on the team who can provide expertise to graduate students. We are also working with other top researchers in this area who will help ensure we are straining and breaking the network as much as possible before launch.)
 
To be more specific, it will be a combination of stressing various components of the system one at a time, along with tests that stress all of the components at once. We are also building software that can automate various scenarios to test how the phones and the library handle different topologies and connectivity. Before we consider it ready for launch, however, we'll need some wide-scale tests with real devices and real traffic. This will likely happen by working with friendly partners who believe in the benefits the mesh can provide in very localized applications (think of a train schedule app in a crowded city, for example). This will inevitably break parts of the protocol, which we will iteratively repair.
 
Once we are satisfied that the network as a whole can maintain stability, tokens properly account for the data being used (verified on the public testnets), and that users of these early partner apps are having a good user experience, we will deploy to the public network.
 
Answers provided by Dr. Jason Ernst, CTO and Chief Networking Scientist
 
 

Have you had direct interest from large enterprise clients wishing to use the mesh technology in their apps/content strategy as yet, or are you having to reach out to them to generate interest?

 
Yes, RightMesh has been receiving direct inquiries from major corporations and organizations every day. These companies are largely interested in reaching emerging markets and regions where connectivity is an issue and has been inaccessible until now. Mesh technology, being so new, will enable new types of applications that have not previously been possible, so proofs of concept for both RightMesh and partners will be a key focus. We’re actively in discussion with companies who are interested in integrating RightMesh into mobile applications, dApps, IoT devices and other hardware products to develop pilot projects.
 
In addition to these inbound inquiries, we have an outbound strategy as well, where we’ve identified key verticals that would benefit from mesh enabled applications. In the near term, over the next year while we harden the RightMesh protocol, we plan to focus on working with partners who provide services like emergency communications, distance education, medical services, and messaging applications, to name a few.
 
We see the need to work with a variety of different types of partners from international NGOs to brand names in order to test various use cases (ex. emergency medical alerts or content distribution from content providers). Our partnership strategy will evolve over time as our protocol matures.
 
We will publish announcements as per our effective disclosure policy once anything is material.
 
If your organization is interested in discussing a partnership or collaboration with RightMesh, we'd love to hear from you! Please email us at [email protected].
 
Answer provided by Brianna MacNeil, Product Manager, Blockchain
 
 

First let me say this product is revolutionary, I know if availability is solved there is no reason not to use this. My question is regarding your choice of an erc20 token, wasn't it more suitable to choose something like IOTA for constant payment of internet access? Are you planning for the payments to be made every second per MB consumed or something like that? Thanks

 

Related question: How exactly do you intend to use microtransactions considering the high transaction fees on the Ethereum network?

 
Thank you very much for your feedback!
 
First, for context, let’s explain why and how RightMesh uses blockchain technology. The protocol is integrated with Ethereum to uniquely identify each node (smartphone/device) in the mesh network by assigning it a MeshID, much as a device with a MAC address is assigned an IP address. Second, participation in the network is incentivized through an ERC20 token, called RMESH, and the network uses a custom implementation of µRaiden to allow micropayments for small amounts of data in the network.
 
We are supporters of Ethereum and its strong development community. Scalability and reducing transaction fees are two of the biggest challenges that the Ethereum community is working on now. But, while that is happening, we have also been looking at our own protocol design to minimize the need of Ethereum transactions and tackle the problem of scalability.
 
Not every microtransaction that occurs on a RightMesh network needs to be secured on the blockchain - that would be vastly inefficient. That’s why we’ve been relying on a payment channel design based on µRaiden, which allows microtransactions to occur between nodes in the network without transaction fees and without depending on the blockchain for every transaction. We think this has to be a joint community effort, and so we’ve published the work we’ve done porting the µRaiden libraries to Java to be used in our Android libraries.
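 
To make the payment-channel idea concrete, here is a minimal, self-contained sketch of the core pattern µRaiden-style channels use: the buyer repeatedly signs a growing cumulative balance off-chain, and only the final balance proof ever needs to be settled on-chain. To stay runnable without external dependencies, the "signature" below is an HMAC stand-in rather than Ethereum's secp256k1 ECDSA, and none of the names come from the microraiden-java library.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: the buyer signs a growing *cumulative* balance off-chain,
// so thousands of micro-purchases cost no gas; only the last balance proof is
// submitted on-chain to settle. The HMAC here is a stand-in for Ethereum ECDSA,
// and this is not code from the microraiden-java library.
public class PaymentChannelSketch {
    private final byte[] buyerKey;
    private long cumulativeWei = 0;

    PaymentChannelSketch(byte[] buyerKey) { this.buyerKey = buyerKey; }

    /** Buyer pays for another chunk of data by signing a new cumulative total. */
    String payForData(long priceWei) throws Exception {
        cumulativeWei += priceWei;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(buyerKey, "HmacSHA256"));
        byte[] sig = mac.doFinal(Long.toString(cumulativeWei).getBytes(StandardCharsets.UTF_8));
        return cumulativeWei + ":" + Base64.getEncoder().encodeToString(sig);
    }

    public static void main(String[] args) throws Exception {
        PaymentChannelSketch channel =
                new PaymentChannelSketch("buyer-secret".getBytes(StandardCharsets.UTF_8));
        String proof = null;
        // 1,000 micro-purchases happen purely off-chain, with no per-purchase fee...
        for (int i = 0; i < 1000; i++) proof = channel.payForData(10);
        // ...and only the final balance proof needs to go on-chain to settle.
        System.out.println("final balance proof: " + proof);
    }
}
```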
 
We also believe that being a part of the Ethereum community also means contributing to it and helping it to move forward.
 
We hope that the work we have been doing on µRaiden and porting the libraries to other languages - specifically Java so it could be used in Android applications - will benefit other projects who plan to use the Ethereum network for microtransactions: https://github.com/RightMesh/microraiden-java
 
Answer provided by Saju Abraham, Chief Product Officer
 
 

If Google/Alphabet succeed with Project Loon, will this damage RightMesh's market?

 
If Google’s Project Loon succeeds, it would be a win for everyone and the planet. The same goes for the SpaceX satellite initiatives, the OneWeb project, Facebook’s global internet initiatives, 5G networks, and the success of other mesh networking technologies in the blockchain space.
 
We each share the goal of bringing connectivity to the nearly 4 billion people who do not have access to internet and connectivity. At the end of the day, we, RightMesh, aim to lift millions out of poverty by providing them with access to the societal and economic benefits afforded by the internet and access to information. This is not something that can be solved by one entity. It will take the combination of different solutions and approaches to make this a reality.
 
One major strength of RightMesh is that we can solve last mile connectivity, which is incidentally complementary to many other projects in the space. There is a good opportunity for us to potentially collaborate with some forward-thinking wireless companies, MVNOs, and corporations working on global connectivity projects, to provide last mile delivery.
 
Answer provided by John Lyotier, CEO & Brianna MacNeil, Product Manager, Blockchain
submitted by BreezyZebra to RightMesh [link] [comments]

