Choose the Right Database for React Native App

Developers use many languages and frameworks to create online platforms, websites, and apps. One of the most popular is React Native, which has found wide adoption in the coding community among people from very different backgrounds, from advanced iOS developers to React beginners. When you hire React Native developers, look for people who can build UIs, know JavaScript and API services, can work on the frontend, and can maintain cross-platform compatibility, infrastructure, and app integration.

Why Do You Need to Be Careful While Selecting the Right Database for React Native?

Developers are under constant pressure to build apps that can be modified at any time. That makes it challenging to choose the correct technology stack for React Native, and in particular the right database.

In coding, data is of utmost importance. In a small program it can be kept in variables, but when the program or software is restarted, those variables are reset to their original values, which means the data is lost. This is where a database plays a vital part: it keeps data stored and lets us retrieve it even after a restart. Async Storage, SQLite, Realm, PouchDB, and more databases are available for use in React Native apps.

React Native is a JavaScript framework, built by Facebook, for building mobile apps for iOS and Android. It helps create rich user interfaces and is designed for mobile platforms rather than the web.

What Are the Different React Native Databases Available for Use?

1. MMKV Storage

MMKV storage allows you to save data in a React Native application quickly. Everything is written in C++, so it's fast and efficient. Another benefit? The library is basic and light (about 50K on Android and 30K on iOS), and it shrinks even more when packed. MMKV also supports redux-persist and allows you to save any form of data, with or without encryption.


  • Encryption support (secure storage)
  • Multiple instances support (separate user data with global data)
  • Customize storage location
  • High performance because everything is written in C++
  • ~30x faster than AsyncStorage
  • Uses JSI instead of the “old” Bridge
  • iOS, Android, and Web support
  • Easy to use React Hooks API
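As a hedged illustration of the synchronous key-value API shape, here's a small in-memory stand-in (the `AppStorage` class is invented for this sketch; a real app would use `new MMKV()` from the react-native-mmkv package):

```javascript
// In-memory stand-in for an MMKV-style synchronous key-value store.
// Real code would instantiate MMKV from react-native-mmkv instead.
class AppStorage {
  constructor() {
    this.store = new Map();
  }
  set(key, value) {
    this.store.set(key, value); // synchronous: no await, no callback
  }
  getString(key) {
    const v = this.store.get(key);
    return typeof v === 'string' ? v : undefined;
  }
  getNumber(key) {
    const v = this.store.get(key);
    return typeof v === 'number' ? v : undefined;
  }
}

const storage = new AppStorage();
storage.set('user.name', 'Ada'); // synchronous write
storage.set('user.age', 36);
console.log(storage.getString('user.name')); // 'Ada'
```

Because reads and writes are synchronous, application code stays flat: no promise chains just to read a setting.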



Pros:

  • For secure local storage, MMKV offers encryption capabilities.
  • It has synchronous storage, which means simpler application code.
  • It is significantly faster.

Cons:

  • Developers new to Ignite face a slightly longer learning curve (though it is still easy to use thanks to synchronous storage).
  • Because it is built on JSI, the Chrome Debugger no longer works; Flipper is the only option.

2. Realm Database

Realm is not based on SQLite. It uses native JavaScript objects that are dynamically mapped to a full, proprietary database engine. As a result, it can offer a simple API while still maintaining performance. Realm allows you to create advanced searches, represent complex data, and link objects in a graph.

Realm is faster than even raw SQLite on common operations and has an extremely rich feature set. It is a lightweight database on Android, uses less memory than SQLite, and is much faster than SQLite at reading and writing data.


  • As Realm is an object store, relationships between objects are allowed via “links.”
  • Each “link” creates a “backlink” as an inverse relationship to whichever objects are linking to the current object.
  • Realm can update its instance version.
  • Realm has “zero-copy architecture” (along with the previously mentioned lazy-loaded data access).
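A rough plain-JavaScript sketch of the link/backlink idea (the `link` helper is invented here; Realm maintains these inverse relationships for you):

```javascript
// Plain-JS sketch of links and backlinks between objects.
// Linking `from` to `to` also records an inverse "backlink" on `to`.
function link(from, to, field) {
  from[field] = to;                 // forward "link"
  (to.backlinks ||= []).push(from); // inverse "backlink"
}

const author = { name: 'Ada' };
const post = { title: 'Hello' };
link(post, author, 'author');

console.log(post.author.name);          // 'Ada'
console.log(author.backlinks[0].title); // 'Hello'
```

In Realm, this is what lets you navigate a graph of objects in both directions without writing join queries.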



Pros:

  • It outperforms other databases in terms of speed.
  • It has all of the necessary features that any database should have.
  • It is compatible with a variety of platforms.
  • The ability to quickly store an entire Realm structure is beneficial to react-native programmers.
  • It is free and simple to incorporate into a project.
  • It comes with a lot of documentation.

3. SQLite

In mobile applications, SQLite, a C-language library, is used as the datastore. SQLite is particularly useful for offline applications, and many platforms come with SQLite support out of the box, making it simple to set up. It is one of the most well-known free databases for Android development, and its most distinguishing feature is that it is open source, making it extremely adaptable to any developer's project and demands.


  • The JavaScript API is the same on iOS and Android.
  • In both Java and Native modes, Android can be used.
  • Simple callbacks or Promises are used to connect with SQL transactions in JavaScript.
  • Importing a pre-populated SQLite database from the app package and sandbox
  • Callback API is supported on Windows, just like it is on iOS and Android.
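As a sketch of the callback-vs-Promise point above (the `executeSql` stub below is a stand-in for the plugin's callback API, not the real implementation):

```javascript
// Stand-in for a callback-style executeSql, as SQLite plugins expose it.
// Here it just pretends the query succeeded and returned one row.
function executeSql(sql, params, onSuccess, onError) {
  setTimeout(() => onSuccess({ rows: [{ id: 1, name: 'Ada' }] }), 0);
}

// Wrap the callback API in a Promise so callers can use async/await.
function query(sql, params = []) {
  return new Promise((resolve, reject) => {
    executeSql(sql, params, resolve, reject);
  });
}

query('SELECT * FROM users WHERE id = ?', [1]).then((result) => {
  console.log(result.rows[0].name); // 'Ada'
});
```

The same wrapping pattern works for transactions: resolve on the success callback, reject on the error callback.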



Pros:

  • Serverless: SQLite requires no server process to function, which increases speed and reduces complexity.
  • Lightweight and straightforward: it is not hampered by unnecessary features, requires no configuration (thanks to being serverless), needs little database administration, and consumes few resources.
  • Self-contained performance: SQLite is (generally) faster than its server-based or file-based counterparts because it loads only the data that is required rather than whole files, and because it needs very little support from the operating system or external libraries.

Cons:

  • Because the database is serverless, it is only accessible on the system where it is stored; it does not allow remote access from another machine.
  • Limited database size: SQLite keeps the complete database in a single disk file, so the file's size is limited by the system's capabilities. If you need a large database, you should generally look into a client/server DBMS.
  • Unsuitable for large-scale applications: SQLite is compact and light, making it ideal as local storage on a PC or phone, but programs with many concurrent users and large amounts of data call for a far more powerful, server-based database management system.



4. Firebase

The Firebase Realtime Database is a cloud-based NoSQL database that allows you to store and sync data in real-time amongst your users. Google Firebase is a Google-backed app development platform that allows developers to create apps for iOS, Android, and the web. Firebase delivers analytics tracking, reporting, and app issue fixes, as well as marketing and product experimentation capabilities.


  • Data is synced across all clients in real-time and remains available even when an app goes offline.
  • Firebase frees developers to focus on crafting fantastic user experiences. You don’t need to manage servers.
  • You don’t need to write APIs. Firebase is your server, your API, and your datastore, all written so generically that you can modify it to suit most needs.
  • Firebase caters to a cloud-hosted database in which the data is stored as JSON and further synchronized constantly to each associated client.
  • The cloud-based database can be leveraged for managing the app’s data and providing swift data outcomes.



Pros:

  • Authentication with email and password, Google, Facebook, and GitHub.
  • Data in real-time, synced across clients.
  • No on-site installation.
  • Ready-to-use APIs.
  • Security is built in at the data node level; Google Cloud Storage provides file storage.
  • Static file hosting.
  • Data can be treated as streams to create highly scalable apps.
  • No need to worry about the state of your infrastructure.

Cons:

  • Due to Firebase's data stream nature, queries are limited. Because traditional relational data models do not apply to NoSQL, your SQL skills are of little use.



5. Watermelon DB

In React Native and React online projects, WatermelonDB is a novel means of managing user data. It’s designed for creating complicated React Native applications, with real-world performance as the top priority. Simply said, your software must launch quickly. Watermelon’s architecture is database-agnostic, allowing it to be used across multiple platforms. It’s a high-level data management layer that can connect to any react-native database layer, based on platform requirements.


  • Watermelon DB uses lazy loading, which means it only loads data when it is requested, making your application highly scalable.
  • Even with 10,000 records, most queries take less than 1 millisecond to complete since all querying is done on a different thread on the SQLite database.
  • Regardless of how much data you have, you can launch your app right away.
  • It’s compatible with iOS, Android, and the web.
  • It’s statically typed with Flow, a JavaScript static type checker, in mind.
  • It’s quick, asynchronous, multi-threaded, and cached to a high degree. It’s made to work with a synchronization engine to keep a react native local database in sync with a distant database.
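A minimal sketch of the lazy-loading idea (pure JavaScript; `lazyCollection` is an invented helper, not WatermelonDB's API):

```javascript
// Sketch of lazy loading: records are only materialized on first access,
// so app launch does no loading work regardless of dataset size.
function lazyCollection(loader) {
  const cache = new Map();
  return {
    get(id) {
      if (!cache.has(id)) cache.set(id, loader(id)); // load on demand
      return cache.get(id);
    },
    loadedCount: () => cache.size,
  };
}

let loads = 0;
const posts = lazyCollection((id) => {
  loads++; // count how many records were actually loaded
  return { id, title: `Post ${id}` };
});

console.log(posts.loadedCount()); // 0 — nothing loaded at "launch"
posts.get(42);
console.log(loads); // 1 — only the requested record was loaded
```

WatermelonDB applies this principle with the actual loading happening on a separate thread against SQLite.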



Pros:

  • Because it also runs on the web, developers can create a web version that saves data locally (instead of being online-only like the Realm Web SDK).
  • It is relational and uses SQL (on top of SQLite).

Cons:

  • To sync data, developers have to build their own backend (which also means handling their own auth and building a REST API).



6. PouchDB

PouchDB is an open-source JavaScript database based on Apache CouchDB and optimized for use in browsers. It was designed to help web developers create apps that are as functional offline as they are online: apps save data locally while offline, then synchronize it with CouchDB and other compatible servers once the app is back online, ensuring that the user's data is always up to date. In the browser, it stores data locally using IndexedDB and WebSQL. You can use PouchDB to interact with both remote and local databases indefinitely without seeing any discrepancies.


  • PouchDB can be used in a variety of browsers because the API it provides is the same in all of them.
  • PouchDB is simple to learn and understand if you have a basic understanding of programming languages.
  • As a lightweight API, we can easily incorporate it using the script tag.
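A hedged sketch of the offline-first pattern PouchDB automates (plain-JavaScript stand-ins; real code would call `db.put()` and `db.sync()` on actual PouchDB instances):

```javascript
// Offline-first sketch: writes always land locally and are queued,
// then replicated to the remote store once connectivity returns.
const local = new Map();
const remote = new Map();
let online = false;
const pending = [];

function put(doc) {
  local.set(doc._id, doc);    // always succeeds, even offline
  if (online) remote.set(doc._id, doc);
  else pending.push(doc._id); // queue for later replication
}

function goOnline() {
  online = true;
  while (pending.length) {    // replicate queued writes
    const id = pending.shift();
    remote.set(id, local.get(id));
  }
}

put({ _id: 'note:1', text: 'written offline' });
goOnline();
console.log(remote.get('note:1').text); // 'written offline'
```

PouchDB's real sync also handles revisions and conflict detection, which this sketch deliberately leaves out.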



Pros:

  • As PouchDB is embedded in the browser, there is no need to run queries over the network, making it extremely speedy.
  • You can run apps both online and offline, synchronizing your data with any of the supported servers.

Cons:

  • It is pretty slow if you have a big database or you are emitting a lot of keys.



7. Vasern

Vasern is a React Native data storage system based on linked-consistent key-value stores. Its data engine is built from the ground up to provide native performance, and its goal is to provide an end-to-end database system that is open source and developer-friendly.


  • UTF-8 encoding is used to support multiple languages.
  • Basic data types are supported (string, int, double, DateTime, and reference).
  • Schemas can be defined; records can be created, updated, queried, and deleted.



Vasern is designed not only for local data storage but also for cloud storage and syncing among app clients. It provides easy APIs, letting native mobile app developers concentrate on the application rather than on database setup and optimization, and the result is a React Native data storage solution that is quick, light, and open source. Because it is driven by JavaScript, it can be used in both web and mobile applications. Unless severe concerns arise, a stable release of Vasern will be announced within a month, and it will continue to be updated and enhanced after that release. With no effort, anyone can install, run, and sync data to their own Vasern servers (under development).



8. Small Organizations Also Use Server-Side Databases for React Native App Development Like MySQL, MongoDB, and DynamoDB


  • Amazon DynamoDB: a hosted, scalable database service offered by Amazon, with data stored in Amazon's cloud.
  • MongoDB: one of the most prominent document storage solutions, from MongoDB, Inc., accessible as a fully managed cloud service or for self-managed infrastructure setup.
  • MySQL: a widely used open-source RDBMS.


What Are the Most Important Factors That All Developers Need to Consider Before Selecting a Database?

Ensure that enough memory is available in the database for the software to execute properly. Choose a database that can handle complex data structures such as entire documents or objects. It's critical to pick a database that supports data synchronization when users reconnect to the internet. Finally, go for a database that can be integrated with a minimum of effort.


When looking for the correct database for your React Native app, your developers should be very clear about all of your requirements. Realm performs admirably as a local database, regardless of the developer's needs. SQLite does not offer the same degree of security, efficiency, or flexibility that Realm does. Furthermore, while Firebase is well-established and well-suited for building real-time applications, if you have a larger strategy and concept, Realm or SQLite are better choices.

Each database has its own set of key benefits, features, and drawbacks. It would be unfair to compare them because they are all unique! The goal is to examine your project’s goals, create a list of databases, study them in-depth, and choose the best one that meets your needs.


Enabling NFT Royalties With EIP-2981

With the finalization of the ERC721 standard, non-fungible tokens (NFTs) started receiving a large amount of attention. These provably unique assets are stored on the blockchain and introduce a new way to collect and trade art, music, profile pictures (PFPs), and more. In the summer of 2021, creating and selling NFTs became a quick way to accumulate wealth because of the boom in popularity.

However, when reading the underlying specification, you’ll notice no functionality for acquiring and splitting royalties in either the ERC721 or ERC1155 standards. These standards only deal with ownership state tracking, approvals, and direct transfers.

If the interface doesn’t contain any native royalty features, as a creator, how can you make NFTs that enable wealth accumulation long after the initial sale? And how do the trading platforms, such as OpenSea, take their cut? And what if you have a more complicated royalty situation, such as splitting royalties?

In this article, we will explore several aspects of royalties with NFTs. We’ll look at ways to implement royalties, including proprietary solutions, registries, and EIP-2981. We’ll also look at one way to split payments. Finally, we will run through a project and see royalty splitting in action.

Let’s get started!

What Are NFT Royalties?

Just months after the first NFTs saw the light of day, marketplace contracts were built that allowed holders to put price tags on their items, bid on and ask for them, and trade them safely with others. Many of these marketplaces don’t even store their users’ bidding transactions on-chain; their match-making contracts collect off-chain signatures for trades and store them on a centralized server infrastructure. The idea of owning something unique on a blockchain has made OpenSea one of the most successful marketplaces in the world, constantly being number one on the gas guzzler leaderboard.

The true success story of those progressively decentralizing marketplaces is written from the fees they take per trade. For example, for each Bored Ape traded at 100 Eth, OpenSea steadily earns a whopping 2.5 Eth for fulfilling the deal on-chain. An incentive to retain creators on their marketplace platforms was to provide them with an option to profit long-term from secondary market sales. That is commonly referred to as “royalties” in the NFT space.

Earning Money for NFT Creators: Minting Fees and Royalties

When someone launches a basic NFT collection by deploying an ERC721 contract, they'll first have to think about minting: how do new tokens come into existence?

It might surprise you that ERC721 does not define any default minting rules itself; even its official specification mentions the term "mint" only once. However, it has become common practice for nearly all collectible NFT contracts to contain a "mint" method that's invoked by sending fees. Newly minted tokens are instantly tradeable on marketplace platforms, and depending on their social market mechanics, freshly minted assets might sell for a multiple of their minting fee within hours.

Minting fees can be profitable for the beneficiary account of an NFT contract. However, the major revenue driver of popular collections is royalties: fees that marketplaces split off the sales price when items are traded on their platform. Since ERC721 doesn't deal with economic concepts, much less with NFT trading, how does an NFT collection enforce a royalty cut on secondary sales? The simple answer? It cannot.

It’s important to understand that royalties are not a concept that’s enforceable by a collection itself. There have been attempts to build collection contracts that come with their own marketplace logic and prohibit transfers outside of their controlled environment, but they never gained much attention since markets are made on the dominating platforms.

For these dominant platforms, royalty payments are a voluntary concept that each marketplace implements individually. So if royalty payments are managed individually by registries and marketplaces rather than by the NFT contracts themselves, how can a collection owner define a royalty scheme that every marketplace supports?

Proprietary Solutions

The simplest way for a marketplace to figure out how many royalty fees to collect and where to transfer them is by relying on its own proprietary interface that’s implemented by the collection. A good example is the Rarible exchange contract that tries to support all kinds of external royalty interfaces, among them two that Rarible defined themselves, being an early player in the NFT space:

interface RoyaltiesV1 {
    event SecondarySaleFees(uint256 tokenId, address[] recipients, uint[] bps);
    function getFeeRecipients(uint256 id) external view returns (address payable[] memory);
    function getFeeBps(uint256 id) external view returns (uint[] memory);
}

interface RoyaltiesV2 {
    event RoyaltiesSet(uint256 tokenId, LibPart.Part[] royalties);
    function getRaribleV2Royalties(uint256 id) external view returns (LibPart.Part[] memory);
}
NFT collections could implement those interfaces to return the royalty amounts and an array of receivers. Then, when a trade takes place on the marketplace contract, the trading contract checks whether the involved NFT collection implements one of those interfaces, calls it, and uses its return values to split fees accordingly.

Note that the actual sales price is not part of the methods' interfaces. Instead, they yield the royalty share as basis points (bps), a term commonly used in royalty distribution schemes that usually translates to 1/10000: a share of 500 means that 5% of the trade value should be sent to the collection owner as royalties.
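As a quick sketch of that arithmetic (plain JavaScript with BigInt wei values; the function name is invented for illustration):

```javascript
// Basis points: 1 bps = 1/10000 of the sale price.
function royaltyFromBps(salePriceWei, bps) {
  return (salePriceWei * BigInt(bps)) / 10000n;
}

const oneEth = 10n ** 18n;                   // 1 Eth in wei
const royalty = royaltyFromBps(oneEth, 500); // 500 bps = 5%
console.log(royalty);                        // 50000000000000000n, i.e. 0.05 Eth
```

Integer division is intentional here: on-chain royalty math works in whole wei, with any remainder rounding down.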

Royalty Registries

However, proprietary interfaces can cause issues. The NFT contract authors cannot know which interfaces might become mandatory to implement since they cannot predict which marketplaces their tokens will be traded on. Even worse, if they launch a collection contract before publishing the relevant marketplace contracts, there’s usually no easy way for them to later add the respective royalty distribution scheme.

To solve this issue, a consortium of major NFT marketplaces agreed to deploy an industry-wide registry contract that collection builders can use to signal royalty splits independently of their token contracts. The Royalty Registry's open-source code base reveals that it supports many of the most important marketplace interfaces.

For example, if an NFT collection owner only implemented one of Rarible’s royalty distribution schemes mentioned above, another marketplace that’s not aware of that interface can simply call the common registry’s getRoyaltyView function. It tries to query all known royalty interfaces on the token contract and translates any response to a commonly useable result.

The registry even goes a step further. Collection owners who haven’t put any royalty signaling scheme into their contract can deploy an extended “override” contract and register it with the common registry. This registration method will ensure that only collection owners (identified by the owner public member) can call it.

EIP-2981: A Standard for Signaling NFT Royalties Across Marketplaces

In 2020, some ambitious individuals started to define a common interface that’s flexible enough to cover most royalty-related use cases and that’s simple to understand and implement: EIP-2981. It defines only one method that NFT contracts can implement:

function royaltyInfo(uint256 _tokenId,  uint256 _salePrice) 
  external view 
  returns (address receiver, uint256 royaltyAmount);

Note its intentional lack of features: It neither cares about a split between several parties nor does it impose any notion of percentages or base points. It’s crystal clear to callers what they’ll receive as a return value and straightforward for implementers how to achieve that.

The interface also completely works off-chain, so marketplaces that trade assets on alternative infrastructure can still query the creator fee without knowing anything else besides the interface signature of the EIP-2981 method.

The interface works for sale amounts denoted in Eth, as well as any other currency. An implementer only has to divide _salePrice by their calculation base and multiply it with the royalty percentage on the same base. While implementers could run complex logic that computes a dynamic royalty depending on external factors, it’s advisable to keep this method’s execution as small as possible, since it will be executed during the sales transfer transactions between trading parties, and their gas fees are supposed to be rather low.

To give you an idea of what a non-trivial EIP-2981 implementation could look like, here’s a snippet you could find on 1/1 NFT collections that signal the original creator’s address and their royalty claim to any marketplace compatible with the standard:



// contracts/Splice.sol
// SPDX-License-Identifier: MIT
pragma solidity 0.8.10;

import '@openzeppelin/contracts/utils/math/SafeMath.sol';

contract OneOnOneNFTMarketPlace {
  using SafeMath for uint256;

  struct RoyaltyReceiver {
    address creator;
    uint8 royaltyPercent;
  }

  mapping(uint256 => RoyaltyReceiver) royalties;

  function mint(
    /* args...*/
    uint8 _royaltyPercent
  ) public {
    //... minting logic ...
    uint256 token_id = 1;
    royalties[token_id] = RoyaltyReceiver({
      creator: msg.sender,
      royaltyPercent: _royaltyPercent
    });
  }

  function royaltyInfo(uint256 tokenId, uint256 salePrice)
    external
    view
    returns (address receiver, uint256 royaltyAmount)
  {
    receiver = royalties[tokenId].creator;
    royaltyAmount = (royalties[tokenId].royaltyPercent * salePrice).div(100);
  }
}
If you’re using OpenZeppelin’s ERC721 base contracts to build NFT contracts, you might already have noticed that they recently added an ERC721Royalty base contract that contains management methods and private members to simplify handling dedicated token royalties.

Royalties for ERC1155 Prints

Marketplaces aren’t the only applications that let their users profit from royalty schemes. For example, Treum’s EulerBeats uses the multi-token standard ERC1155 in their collection of contracts, which represent NFTs that combine computer-generated tunes and generative artworks. After minting a seed token, users can derive a limited amount of prints from it, and the price for each print increases along a bonding curve defined by the token contract.

Every time a new print of an Enigma seed is minted, the contract transfers a 50% royalty cut of the minting fee to the current seed’s owner. If the receiving side implements the platform-specific IEulerBeatsRoyaltyReceiver interface, it can even react to royalty payouts and execute code once a print of their seed has been minted.

PaymentSplitters: Sending NFT Royalties to More Than One Receiver

EIP-2981 falls short of a use case that other approaches solve out of the box. It can only signal one recipient address for royalties to the requesting side. As a result, situations that require royalties to be split among several recipients must be implemented individually.

This can impose several new issues: First, the caller/marketplace doesn’t necessarily have to send funds along with the same transaction that triggered the trade but could decide to do so later, such as in a gas-efficient multi-call from another account. Second, payout calls to addresses might be highly restricted in gas usage. Any default receiver function in Solidity is highly encouraged to use as little gas as possible since senders might not be aware that they’re transferring funds to a contract.

The most important consideration is that sending money directly from contract interactions imposes the risk of running into reentrancy holes; that’s why it’s highly advisable to favor pull mechanics that allow beneficiaries to withdraw their earnings from time to time instead of pushing funds directly to addresses unknown to the calling contract.

Luckily, OpenZeppelin’s base contracts cover us again. Their PaymentSplitter primitive allows setting up individual split contracts that keep funds safe until their payees claim them, and their receive function requires the bare minimum of gas to run. NFT collection builders can create an inline PaymentSplitter containing the wanted list of beneficiaries and their respective share amounts and let their EIP-2981 implementation yield the address of that split contract.
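A hedged sketch of the pull-payment math behind that approach (plain JavaScript; `SplitterSketch` is invented for illustration and mirrors the shares/released bookkeeping, not OpenZeppelin's actual code):

```javascript
// Pull-payment split: each payee may withdraw
//   totalReceived * shares / totalShares - alreadyReleased
class SplitterSketch {
  constructor(shares) { // e.g. { alice: 70n, bob: 30n }
    this.shares = shares;
    this.totalShares = Object.values(shares).reduce((a, b) => a + b, 0n);
    this.released = Object.fromEntries(
      Object.keys(shares).map((k) => [k, 0n])
    );
  }
  releasable(account, totalReceived) {
    return (
      (totalReceived * this.shares[account]) / this.totalShares -
      this.released[account]
    );
  }
  release(account, totalReceived) {
    const amount = this.releasable(account, totalReceived);
    this.released[account] += amount; // record before "sending" funds
    return amount;
  }
}

const splitter = new SplitterSketch({ alice: 70n, bob: 30n });
console.log(splitter.release('alice', 100n)); // 70n
console.log(splitter.release('alice', 100n)); // 0n — already withdrawn
```

Note the pull mechanic: beneficiaries call `release` themselves, so the contract never pushes funds to unknown addresses during a sale.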

The tradeoffs of that approach might be negligible for many use cases: PaymentSplitter deployments are comparatively gas-intensive, and it's impossible to replace payees or shares once a splitter has been initialized. A sample implementation of how to effectively replace splitter participants and instantiate gas-efficient subcontracts can be found in the generative art project Splice.

Testing NFT Royalty Payouts With a Local Mainnet Fork

Engineering marketplaces that interact with an arbitrary NFT contract is not a simple task since it’s unpredictable whether contracts on live networks behave according to the ERC interfaces. However, it can be helpful to test our code against these contracts using Ganache. This powerful tool lets us create an instant fork of the Ethereum network on our local machine without setting up our own blockchain node. Instead, it relies on Infura nodes to read the current state of contracts and accounts we’re interacting with.

Before we start our blockchain instance, let’s clone the repository of our proof-of-concept, change it into the new directory, and install any dependencies:

git clone
cd royalty-marketplace
npm i

To see what’s going on in this NFT marketplace example, let’s take a look at the ClosedDesert.sol code in the contracts folder.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/token/ERC721/IERC721.sol";
import "@openzeppelin/contracts/utils/Address.sol";
import "@openzeppelin/contracts/security/ReentrancyGuard.sol";
import "@openzeppelin/contracts/token/common/ERC2981.sol";
import "@manifoldxyz/royalty-registry-solidity/contracts/IRoyaltyEngineV1.sol";

struct Offer {
  IERC721 collection;
  uint256 token_id;
  uint256 priceInWei;
}

/**
 * a fixed reserve price marketplace
 */
contract ClosedDesert is ReentrancyGuard {

  mapping(bytes32 => Offer) public offers;

  IRoyaltyEngineV1 royaltyEngineMainnet = IRoyaltyEngineV1(0x0385603ab55642cb4Dd5De3aE9e306809991804f);

  event OnSale(bytes32 offerHash, address indexed collection, uint256 token_id, address indexed owner);
  event Bought(address indexed collection, uint256 token_id, address buyer, uint256 price);

  function sellNFT(IERC721 collection, uint256 token_id, uint256 priceInWei) public {
    require(collection.ownerOf(token_id) == msg.sender, "must own the NFT");
    require(collection.getApproved(token_id) == address(this), "must approve the marketplace to sell");

    bytes32 offerHash = keccak256(abi.encodePacked(collection, token_id));
    offers[offerHash] = Offer({
      collection: collection,
      token_id: token_id,
      priceInWei: priceInWei
    });
    emit OnSale(offerHash, address(collection), token_id, msg.sender);
  }

  function buyNft(bytes32 offerHash) public payable nonReentrant {
    Offer memory offer = offers[offerHash];
    require(address(offer.collection) != address(0x0), "no such offer");
    require(msg.value >= offer.priceInWei, "reserve price not met");

    address payable owner = payable(offer.collection.ownerOf(offer.token_id));

    emit Bought(address(offer.collection), offer.token_id, msg.sender, offer.priceInWei);

    // effect: clear offer
    delete offers[offerHash];

    (address payable[] memory recipients, uint256[] memory amounts) =
      royaltyEngineMainnet.getRoyalty(address(offer.collection), offer.token_id, msg.value);

    uint256 payoutToSeller = offer.priceInWei;

    //transfer royalties
    for (uint i = 0; i < recipients.length; i++) {
      payoutToSeller = payoutToSeller - amounts[i];
      Address.sendValue(recipients[i], amounts[i]);
    }
    //transfer remaining sales revenue to seller
    Address.sendValue(owner, payoutToSeller);

    //finally transfer asset
    offer.collection.safeTransferFrom(owner, msg.sender, offer.token_id);
  }
}
In our example, sellers can list their assets for a fixed sales price after being approved for transfers. Buyers can watch for OnSale events and respond by issuing buyNft transactions and sending along the wanted Eth value. The marketplace contract checks the open mainnet NFT royalties registry during a sale transaction to see whether the collection owners are requesting royalties and then pays them out accordingly. As stated above, the public royalty registry already takes EIP-2981 compatible contracts into account. Still, it supports many other proprietary distribution schemes as well.

Next, we will deploy our local blockchain instance and test our contract using the accounts and NFTs of real users.

To test the contract behavior under mainnet conditions, we first need access to an Infura mainnet node by requesting a project id and installing Ganache v7 locally on our machine. We can then use our favorite NFT marketplace to look up a collection and find an NFT holder account that will play the seller role in our test. The seller must actually own the NFT we will sell.

Finally, find an account with sufficient mainnet funds (at least 1 Eth) to pay for the seller’s requested sales price. With these accounts and tools at hand, we can spin up a local Ganache mainnet instance using the following command in a new terminal window:

npx ganache --fork https://mainnet.infura.io/v3/<infuraid> --unlock <0xseller-account> --unlock <0xbuyer-account>

Be sure to use your own Infura mainnet endpoint for the URL in the command above.


If you are having trouble finding accounts to unlock, here are a couple to try:

Seller Address: 0x27b4582d577d024175ed7ffb7008cc2b1ba7e1c2
Buyer Address: 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045

Note: Because we are simulating the Ethereum mainnet in our Ganache instance, by the time you read this, the seller may no longer own the NFT we will be selling or the buyer may no longer have enough Eth to actually make the purchase. So if these addresses don’t work, you will have to find ones that meet the above criteria.

Using the example addresses above, our command looks like this:

npx ganache --fork https://mainnet.infura.io/v3/<infura-project-id> --unlock 0x27b4582d577d024175ed7ffb7008cc2b1ba7e1c2 --unlock 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045

Next, in our original terminal window, we’ll compile and deploy the marketplace contract from the repository and choose our local mainnet fork provider, which can be found in the truffle-config.js:

npx truffle compile 
npx truffle migrate --network mainfork

Now we can test our royalty-aware marketplace contract under mainnet conditions without paying a penny for gas costs. All upcoming transactions will be executed by the local Ganache chain on behalf of the accounts of real users.

Let’s take a look at the testMarketplace.js script (found in the scripts folder) we will use to interact with our deployed marketplace smart contract:

const ClosedDesert = artifacts.require("ClosedDesert");
const IErc721 = require("../build/contracts/IERC721.json");

//Change these constants:
const collectionAddress = "0xed5af388653567af2f388e6224dc7c4b3241c544"; // Azuki
const tokenId = 9183;
const sellerAddress = "0x27b4582d577d024175ed7ffb7008cc2b1ba7e1c2";
const buyerAddress = "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045";

module.exports = async function (callback) {
  try {
    const marketplace = await ClosedDesert.deployed();
    const erc721 = new web3.eth.Contract(IErc721.abi, collectionAddress);
    const salesPrice = web3.utils.toWei("1", "ether");

    // marketplace needs the seller's approval to transfer their tokens
    await erc721.methods.approve(marketplace.address, tokenId).send({ from: sellerAddress });
    const sellReceipt = await marketplace.sellNFT(collectionAddress, tokenId, salesPrice, {
      from: sellerAddress,
    });
    const { offerHash } = sellReceipt.logs[0].args;

    const oldOwner = await erc721.methods.ownerOf(tokenId).call();
    console.log(`owner of ${collectionAddress} #${tokenId}`, oldOwner);

    const oldSellerBalance = web3.utils.toBN(await web3.eth.getBalance(sellerAddress));
    console.log("Seller Balance (Eth):", web3.utils.fromWei(oldSellerBalance));

    // buyer buys the item for a sales price of 1 Eth
    const buyReceipt = await marketplace.buyNft(offerHash, { from: buyerAddress, value: salesPrice });
    const newOwner = await erc721.methods.ownerOf(tokenId).call();
    console.log(`owner of ${collectionAddress} #${tokenId}`, newOwner);

    const newSellerBalance = web3.utils.toBN(await web3.eth.getBalance(sellerAddress));
    console.log("Seller Balance (Eth):", web3.utils.fromWei(newSellerBalance));
    console.log("Seller Balance Diff (Eth):", web3.utils.fromWei(newSellerBalance.sub(oldSellerBalance)));
  } catch (e) {
    console.error(e);
  } finally {
    callback();
  }
};
Note: The collectionAddress, sellerAddress, and buyerAddress constants must all be legitimate mainnet addresses that meet the aforementioned criteria, and the sellerAddress and buyerAddress must both be unlocked in your Ganache instance. The tokenId constant must also be the actual tokenId of the NFT the seller owns.

In this helper script, we set up references to the contracts we will interact with. We chose the EIP-2981 compatible Azuki collection for the sample code, but it could be any NFT collection. We run the script using the following command:

npx truffle exec scripts/testMarketplace.js --network mainfork

If everything ran correctly, you should receive output in your console like the following:

owner of Azuki 0xed5af388653567af2f388e6224dc7c4b3241c544 #9183 0x27b4582D577d024175ed7FFB7008cC2B1ba7e1C2
Seller Balance (Eth): 0.111864414925655418
owner of Azuki 0xed5af388653567af2f388e6224dc7c4b3241c544 #9183 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045
Seller Balance (Eth): 1.061864414925655418
Seller Balance Diff (Eth): 0.95

Let’s run through the steps that just happened so we can understand how it works. First, the script asks for the seller’s approval to transfer their NFT once it’s sold, a step usually handled by the respective marketplace contracts. Then, we create a sales offer by calling sellNFT on behalf of the current owner. Finally, we simply reuse the offer hash contained in the sale event and let our buyer call the buyNft method, sending the requested sales price of 1 Eth.

When you compare the seller’s balance before and after the trade, you’ll notice that they didn’t receive the requested amount of 1 Eth, but only 0.95. The remaining funds have been transferred to Azuki’s royalty recipients as they were signaled by the mainnet royalty registry contract.
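The 0.95 Eth payout can be reproduced off-chain. The sketch below (plain JavaScript with BigInt, assuming a single royalty recipient taking 5%, as signaled for this collection in our run) mirrors the subtraction loop in the contract's buy function:

```javascript
// Reproduce the marketplace payout split using integer wei math.
// Assumption: one royalty recipient at a 5% rate, as in our test run.
const WEI = 10n ** 18n;

function splitPayout(priceInWei, royaltyAmounts) {
  let payoutToSeller = priceInWei;
  for (const amount of royaltyAmounts) {
    payoutToSeller -= amount; // each royalty recipient reduces the seller's share
  }
  return payoutToSeller;
}

const salesPrice = 1n * WEI; // 1 Eth
const royalties = [(salesPrice * 5n) / 100n]; // 5% royalty = 0.05 Eth

const sellerShare = splitPayout(salesPrice, royalties);
console.log(sellerShare === (95n * WEI) / 100n); // true: the seller receives 0.95 Eth
```

The same arithmetic generalizes to multiple recipients, which is why the contract loops over the registry's recipients array instead of assuming a single payee.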


Conclusion

Royalties are the primary driver of success in the NFT space. Previously an add-on feature of proprietary marketplaces, they have evolved into a mandatory property of the non-fungible token economy. They promise any NFT collection builder a share of the profits when their creations start to attract a broad audience, and they are a great economic concept for distributing sales revenue in a way that provides incentives to the original code authors or NFT artists.

ERC721 doesn’t contain any notion of economic features; hence, NFT royalties cannot be directly enforced by the token contracts. Instead, marketplace builders had to provide interfaces for token contracts to signal their claim on trading fees and where to send them. The EIP-2981 royalty signaling interface is a concise and powerful industry standard that achieves this without adding complexity on the implementer’s side. Every new ERC721 contract should consider implementing at least a basic royalty signal so proprietary marketplace tools can pick it up and refer to it.


Accepting Crypto Payments in Classic Commerce App

E-commerce storefronts have been slow to offer crypto payment methods to their customers. Crypto payment plug-ins or payment gateway integrations aren’t generally available, or they rely on third-party custodians to collect, exchange, and distribute money. Considering the growing ownership and experimentation rates of cryptocurrencies, a “pay with crypto” button could greatly drive sales.

This article demonstrates how you can integrate a custom, secure crypto payment method into any online store without relying on a third-party service. Coding and maintaining smart contracts requires quite a bit of heavy lifting under the hood, a job that we hand over to the Truffle suite, a commonly used toolchain for blockchain builders. To provide access to blockchain nodes during development and for the application backend, we rely on Infura nodes, which offer access to the Ethereum network with a generous free tier. Using these tools together makes the development process much easier.

Scenario: The Amethon Bookstore

The goal is to build a storefront for downloadable eBooks that accepts the Ethereum blockchain’s native currency (“Ether”) and ERC20 stablecoins (payment tokens pegged in USD) as payment methods. Let’s refer to it as “Amethon” from here on. The full implementation can be found in the accompanying GitHub monorepo. All code is written in TypeScript and can be compiled using the package’s yarn build or yarn dev commands.

We’ll walk you through the process step by step, but familiarity with smart contracts, Ethereum, and minimal knowledge of the Solidity programming language will be helpful to read along. We recommend reading some fundamentals first to become familiar with the ecosystem’s basic concepts.

Application Structure

The store backend is built as a CRUD API that is not connected to any blockchain itself. Its front end triggers payment requests on that API, which customers fulfill using their crypto wallets.

Amethon is designed as a “traditional” ecommerce application that takes care of the business logic and doesn’t rely on any on-chain data besides the payment itself. During checkout, the backend issues PaymentRequest objects that carry a unique identifier (such as an “invoice number”) that users attach to their payment transactions.

A background daemon listens to the respective contract events and updates the store’s database when it detects a payment.

Payment settlements on Amethon

The PaymentReceiver Contract

At the center of Amethon, the PaymentReceiver smart contract accepts and escrows payments on behalf of the storefront owner.

Each time a user sends funds to the PaymentReceiver contract, a PaymentReceived event is emitted containing information about the payment’s origin (the customer’s Ethereum account), its total value, the ERC20 token contract address utilized, and the paymentId that refers to the backend’s database entry.

 event PaymentReceived(
    address indexed buyer,
    uint256 value,
    address token,
    bytes32 paymentId
  );

Ethereum contracts act similarly to user-based (aka “externally owned” / EOA) accounts and get their own account address upon deployment. Receiving the native Ether currency requires implementing the receive and fallback functions, which are invoked when someone transfers Ether funds to the contract and no other function signature matches the call:

 receive() external payable {
    emit PaymentReceived(msg.sender, msg.value, ETH_ADDRESS, bytes32(0));
  }

  fallback() external payable {
    emit PaymentReceived(
      msg.sender, msg.value, ETH_ADDRESS, bytes32(msg.data) //the attached payment id
    );
  }

The official Solidity docs point out the subtle difference between these functions: receive is invoked when the incoming transaction doesn’t contain additional data; otherwise, fallback is called. The native currency of Ethereum itself is not an ERC20 token and has no utility besides being a counting unit. However, it has an identifiable address (0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE) that we use to signal an Ether payment in our PaymentReceived events.

Ether transfers, however, have a major shortcoming: the amount of allowed computation upon reception is extremely low. The gas sent along by customers merely allows us to emit an event but not to redirect funds to the store owner’s original address. Therefore, the receiver contract keeps all incoming Ethers and allows the store owner to release them to their own account at any time:

function getBalance() public view returns (uint256) {
  return address(this).balance;
}

function release() external onlyOwner {
  (bool ok, ) ={value: getBalance()}("");
  require(ok, "Failed to release Eth");
}

Accepting ERC20 tokens as payment is slightly more difficult for historical reasons. In 2015, the authors of the initial specification couldn’t predict the upcoming requirements and kept the ERC20 standard’s interface as simple as possible. Most notably, ERC20 contracts aren’t guaranteed to notify recipients about transfers, so there’s no way for our PaymentReceiver to execute code when ERC20 tokens are transferred to it.

The ERC20 ecosystem has evolved and now includes additional specs. For example, the EIP 1363 standard addresses this very problem. Unfortunately, you cannot rely on major stablecoin platforms to have implemented it.

So Amethon must accept ERC20 token payments in the “classic” way. Instead of having tokens “dropped” on it unwittingly, the contract takes care of the transfer on behalf of the customer. This requires users to first allow the contract to handle a certain amount of their funds, which inconveniently means transmitting an Approval transaction to the ERC20 token contract before interacting with the real payment method. EIP-2612 might improve this situation; for the time being, however, we have to play by the old rules.

 function payWithErc20(
    IERC20 erc20,
    uint256 amount,
    bytes32 paymentId
  ) external {
    erc20.transferFrom(msg.sender, _owner, amount);
    emit PaymentReceived(msg.sender, amount, address(erc20), paymentId);
  }

Compiling, Deploying, and Variable Safety

Several toolchains allow developers to compile, deploy, and interact with Ethereum smart contracts, but one of the most advanced ones is the Truffle Suite. It comes with a built-in development blockchain based on Ganache and a migration concept that allows you to automate and safely run contract deployments.

Deploying contracts on “real” blockchain infrastructure, such as Ethereum testnets, requires two things: an Ethereum provider that’s connected to a blockchain node and either the private keys/wallet mnemonics of an account or a wallet connection that can sign transactions on behalf of an account. The account also needs to have some (testnet) Ethers on it to pay for gas fees during deployment.

MetaMask does that job. Create a new account that you use for nothing but deployment (it will become the “owner” of the contract) and fund it with some Ethers using your preferred testnet’s faucet (we recommend Paradigm). Usually, you would now export that account’s private key (“Account Details” > “Export Private Key”) and wire it up with your development environment. To circumvent all the security issues implied by that workflow, however, Truffle comes with a dedicated dashboard network and web application that can be used to sign transactions like contract deployments using MetaMask inside a browser. To start it up, execute truffle dashboard in a fresh terminal window and visit http://localhost:24012/ using a browser with an active MetaMask extension.

Using truffle’s dashboard to sign transactions without exposing private keys

The Amethon project also relies on various secret settings. Note that, due to the way dotenv-flow works, .env files contain samples or publicly visible settings, which are overridden by gitignored .env.local files. Copy all .env files in the packages’ subdirectories to .env.local and override their values.

To connect your local environment to an Ethereum network, you need access to a synced blockchain node. While you certainly could download one of the many clients and wait for it to sync on your machine, it is far more convenient to connect your applications to Ethereum nodes that are offered as a service, the most well-known being Infura. Their free tier provides you with three different access keys and 100k RPC requests per month, supporting a wide range of Ethereum networks.

After signup, take note of your Infura key and put it in your contracts’ .env.local as INFURA_KEY.

If you’d like to interact with contracts, e.g. on the Kovan network, simply add the respective truffle configuration and a --network kovan option to all your truffle commands. You can even start an interactive console: yarn truffle console --network kovan. There isn’t any special setup process needed to test contracts locally. To make our lives simple, we’re using the providers and signers injected by MetaMask through the truffle dashboard provider instead.

Change to the contracts folder and run yarn truffle develop. This will start a local blockchain with pre-funded accounts and open a connected console on it. To connect your MetaMask wallet to the development network, create a new network using http://localhost:9545 as its RPC endpoint. Take note of the accounts listed when the chain starts: you can import their private keys into your MetaMask wallet to send transactions on their behalf on your local blockchain.

Type compile to compile all contracts at once and deploy them to the local chain with migrate. You can interact with contracts by requesting their currently deployed instance and calling its functions like so:

pr = await PaymentReceiver.deployed()
balance = await pr.getBalance()

Once you’re satisfied with your results, you can then deploy them on a public testnet (or mainnet), as well:

yarn truffle migrate --interactive --network dashboard

The Backend

The Store API / CRUD

Our backend provides a JSON API to interact with payment entities on a high level. We’ve decided to use TypeORM and a local SQLite database to support entities for Books and PaymentRequests. Books represent our shop’s main entity and have a retail price, denoted in USD cents. To initially seed the database with books, you can use the accompanying seed.ts file. After compiling the file, you can execute it by invoking node build/seed.js.

import { Entity, Column, PrimaryColumn, OneToMany } from "typeorm";
import { PaymentRequest } from "./PaymentRequest";

@Entity()
export class Book {
  @PrimaryColumn()
  ISBN: string;

  @Column()
  title: string;

  @Column()
  retailUSDCent: number;

  @OneToMany(
    () => PaymentRequest,
    (paymentRequest: PaymentRequest) =>
  )
  payments: PaymentRequest[];
}

Heads up: storing monetary values as floats is strongly discouraged on any computer system because operating on float values will inevitably introduce precision errors. This is also why all crypto tokens operate with 18 decimal digits and Solidity doesn’t even have a float data type: 1 Ether actually represents “1000000000000000000” Wei, the smallest Ether unit.
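A quick illustration of the problem in plain JavaScript, and of how counting in the smallest unit with BigInt avoids it:

```javascript
// Floating point: the classic precision trap
console.log(0.1 + 0.2 === 0.3); // false: the sum is 0.30000000000000004

// Instead, count in the smallest unit (cents, or wei) using BigInt
const priceInCent = 1034n; // $10.34, exactly as stored in the database
const quantity = 3n;
console.log(priceInCent * quantity); // 3102n: exact, no rounding drift

// 1 Ether expressed in wei, the smallest Ether unit
const oneEtherInWei = 10n ** 18n;
console.log(oneEtherInWei === 1000000000000000000n); // true
```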

Users who intend to buy a book from Amethon first create an individual PaymentRequest for their item by calling the /books/:isbn/order route. This creates a new unique identifier that must be sent along with each payment.

We’re using plain integers here; for real-world use cases, you’ll use something more sophisticated. The only restriction is the id’s binary length, which must fit into 32 bytes (uint256). Each PaymentRequest inherits the book’s retail value in USD cents and bears the customer’s address; fulfilledHash and paidUSDCent will be determined during the buying process.
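Converting such an integer id into a value that fits a 32-byte (uint256) parameter boils down to left-padding its hex representation. A minimal sketch in plain JavaScript (web3.js offers equivalent helpers such as web3.utils.padLeft):

```javascript
// Encode a numeric payment id as a 0x-prefixed, 32-byte (64 hex chars) word,
// the shape a contract expects for a bytes32/uint256 parameter.
function idToUint256Hex(id) {
  const hex = BigInt(id).toString(16);
  if (hex.length > 64) throw new Error("id does not fit into 32 bytes");
  return "0x" + hex.padStart(64, "0");
}

console.log(idToUint256Hex(6));
// 0x0000000000000000000000000000000000000000000000000000000000000006
```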

import { Entity, Column, PrimaryGeneratedColumn, ManyToOne } from "typeorm";
import { Book } from "./Book";

@Entity()
export class PaymentRequest {
  @PrimaryGeneratedColumn()
  id: number;

  @Column("varchar", { nullable: true })
  fulfilledHash: string | null;

  @Column()
  address: string;

  @Column()
  priceInUSDCent: number;

  @Column("mediumint", { nullable: true })
  paidUSDCent: number;

  @ManyToOne(() => Book, (book) => book.payments)
  book: Book;
}

An initial order request that creates a PaymentRequest entity looks like this:

POST http://localhost:3001/books/978-0060850524/order
Content-Type: application/json

{
  "address": "0xceeca1AFA5FfF2Fe43ebE1F5b82ca9Deb6DE3E42"
}

The response contains the freshly created payment request and the receiving contract’s address:

{
  "paymentRequest": {
    "book": {
      "ISBN": "978-0060850524",
      "title": "Brave New World",
      "retailUSDCent": 1034
    },
    "address": "0xceeca1AFA5FfF2Fe43ebE1F5b82ca9Deb6DE3E42",
    "priceInUSDCent": 1034,
    "fulfilledHash": null,
    "paidUSDCent": null,
    "id": 6
  },
  "receiver": "0x7A08b6002bec4B52907B4Ac26f321Dfe279B63E9"
}

The Blockchain Listener Background Service

Querying a blockchain’s state tree doesn’t cost clients any gas, but nodes still need to compute. When those operations become too computation-heavy, they can time out. For real-time interactions, it is highly recommended not to poll chain state but rather to listen to events emitted by transactions. This requires a WebSocket-enabled provider, so make sure to use the Infura endpoints that start with wss:// as the URL scheme for your backend’s PROVIDER_RPC environment variable. Then you can start the backend’s daemon.ts script and listen for PaymentReceived events on any chain:

  const web3 = new Web3(process.env.PROVIDER_RPC as string);
  const paymentReceiver = new web3.eth.Contract(
    paymentReceiverAbi as AbiItem[],
    process.env.PAYMENT_RECEIVER_CONTRACT as string
  );

  const emitter ={
    fromBlock: "0",
  });

  emitter.on("data", handlePaymentEvent);

Take note of how we’re instantiating the Contract instance with an Application Binary Interface (ABI). The Solidity compiler generates the ABI; it contains the information RPC clients need to encode transactions that invoke functions and to decode events and parameters of a smart contract.

Once instantiated, you can hook a listener on the contract’s PaymentReceived logs (starting at block 0) and handle them once received.

Since Amethon supports Ether and stablecoin (“USD”) payments, the daemon’s handlePaymentEvent method first checks which token has been used in the user’s payment and computes its dollar value, if needed:

const ETH_USD_CENT = 2_200 * 100;
const ACCEPTED_USD_TOKENS = (process.env.STABLECOINS as string).split(",");
const NATIVE_ETH = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE";

const handlePaymentEvent = async (event: PaymentReceivedEvent) => {
  const args = event.returnValues;
  const paymentId = web3.utils.hexToNumber(args.paymentId);
  const decimalValue = web3.utils.fromWei(args.value);
  const payment = await paymentRepo.findOne({ where: { id: paymentId } });

  let valInUSDCents;
  if (args.token === NATIVE_ETH) {
    valInUSDCents = parseFloat(decimalValue) * ETH_USD_CENT;
  } else {
    if (!ACCEPTED_USD_TOKENS.includes(args.token)) {
      return console.error("payments of that token are not supported");
    }
    valInUSDCents = parseFloat(decimalValue) * 100;
  }

  if (valInUSDCents < payment.priceInUSDCent) {
    return console.error(`payment [${paymentId}] not sufficient`);
  }

  payment.paidUSDCent = valInUSDCents;
  payment.fulfilledHash = event.transactionHash;
  await; //persist the fulfilled payment
};

The Frontend

Our bookstore’s frontend is built on the official Create React App template with TypeScript support and uses Tailwind for basic styles. It supports all known CRA scripts, so you can start it locally with yarn start after creating your own .env.local file containing the payment receiver and stablecoin contract addresses you created before.

Heads up: CRA5 bumped their webpack dependency to a version that no longer supports node polyfills in browsers. This breaks the builds of nearly all Ethereum-related projects today. A common workaround that avoids ejecting is to hook into the CRA build process. We’re using react-app-rewired but you could simply stay at CRA4 until the community comes up with a better solution.

Connecting a WEB3 Wallet

The crucial part of any Dapp is connecting to a user’s wallet. You could try to manually wire that process following the official MetaMask docs but we strongly recommend using an appropriate React library. We found Noah Zinsmeister’s web3-react to be the best. Detecting and connecting a web3 client boils down to this code (ConnectButton.tsx):

import { useWeb3React } from "@web3-react/core";
import { InjectedConnector } from "@web3-react/injected-connector";
import React from "react";
import Web3 from "web3";

export const injectedConnector = new InjectedConnector({
  supportedChainIds: [42, 1337, 31337], //Kovan, Truffle, Hardhat
});

export const ConnectButton = () => {
  const { activate, account, active } = useWeb3React<Web3>();

  const connect = () => {
    activate(injectedConnector, console.error);
  };

  return active ? (
    <div className="text-sm">connected as: {account}</div>
  ) : (
    <button className="btn-primary" onClick={connect}>
      connect
    </button>
  );
};
By wrapping your App’s code in a <Web3ReactProvider getLibrary={getWeb3Library}> context, you can access the web3 provider, account, and connected state using the useWeb3React hook from any component. Since Web3React is agnostic to the web3 library being used (Web3.js or ethers.js), you must provide a callback that yields a connected “library”:

import Web3 from "web3";
function getWeb3Library(provider: any) {
  return new Web3(provider);
}

Payment Flows

After loading the available books from the Amethon backend, the <BookView> component first checks whether payments for this user have already been processed and then displays all supported payment options bundled inside the <PaymentOptions> component.

Paying With ETH

The <PayButton> is responsible for initiating direct Ether transfers to the PaymentReceiver contract. Since these calls are not interacting with the contract’s interface directly, we don’t even need to initialize a contract instance:

const weiPrice = usdInEth(paymentRequest.priceInUSDCent);

const tx = web3.eth.sendTransaction({
  from: account, //the current user
  to: paymentRequest.receiver.options.address, //the PaymentReceiver contract address
  value: weiPrice, //the eth price in wei (10**18)
  data: paymentRequest.idUint256, //the paymentRequest's id, converted to a uint256 hex string
});
const receipt = await tx;

As explained earlier, since the new transaction carries a data field, Solidity’s convention triggers the PaymentReceiver’s fallback() external payable function, which emits a PaymentReceived event with Ether’s pseudo token address. This is picked up by the daemonized chain listener, which updates the backend’s database state accordingly.

A static helper function is responsible for converting the current dollar price to an Ether value. In a real-world scenario, you would query the exchange rates from a trustworthy third party like Coingecko or from a DEX like Uniswap. Doing so allows you to extend Amethon to accept arbitrary tokens as payments.

const ETH_USD_CENT = 2_200 * 100;
export const usdInEth = (usdCent: number) => {
  const eth = (usdCent / ETH_USD_CENT).toString();
  const wei = Web3.utils.toWei(eth, "ether");
  return wei;
};
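The same conversion can be done without floating point by staying in integer wei throughout. A sketch in plain JavaScript with BigInt, using the hard-coded ETH_USD_CENT demo rate from above:

```javascript
// Integer-only USD-cent -> wei conversion, avoiding float division:
// wei = usdCent * 10^18 / ETH_USD_CENT
const ETH_USD_CENT = 2200n * 100n; // demo rate: 1 Eth = $2,200
const WEI_PER_ETH = 10n ** 18n;

function usdCentToWei(usdCent) {
  return (BigInt(usdCent) * WEI_PER_ETH) / ETH_USD_CENT;
}

console.log(usdCentToWei(1034)); // 4700000000000000n wei for the $10.34 book
console.log(usdCentToWei(220000) === WEI_PER_ETH); // true: $2,200 buys exactly 1 Eth
```

Note that integer division truncates any remainder; for sub-wei remainders this is negligible, but rounding policy matters if you ever convert in the other direction.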

Paying With ERC20 Stablecoins

For reasons mentioned earlier, payments in ERC20 tokens are slightly more complex from a user’s perspective since one cannot simply drop tokens on a contract. Like nearly anyone with a comparable use case, we must first ask the user for permission for our PaymentReceiver contract to transfer their funds and then call the actual payWithErc20 method, which transfers the requested funds on behalf of the user.

Here’s the PayWithStableButton’s code for giving permission on a selected ERC20 token:

const coin = new web3.eth.Contract(
  IERC20ABI as AbiItem[],
  process.env.REACT_APP_STABLECOINS as string //the stablecoin's contract address
);

const appr = await coin.methods
  .approve(
    paymentRequest.receiver.options.address, //receiver contract's address
    price // USD value in wei precision (1$ = 10^18wei)
  )
  .send({
    from: account,
  });

Note that the Contract instance of the ERC20 token is set up with a generic IERC20 ABI. We’re using the generated ABI from OpenZeppelin’s official library, but any other generated ABI would do the job. After approving the transfer, we can initiate the payment:

const contract = new web3.eth.Contract(
  PaymentReceiverAbi as AbiItem[],
  paymentRequest.receiver.options.address //the PaymentReceiver contract address
);
const tx = await contract.methods
  .payWithErc20(
    process.env.REACT_APP_STABLECOINS, //identifies the ERC20 contract
    weiPrice, //price in USD (it's a stablecoin)
    paymentRequest.idUint256 //the paymentRequest's id as uint256
  )
  .send({
    from: account,
  });

Signing Download Requests

Finally, our customer can download their eBook. But there’s an issue: since we don’t have a “logged in” user, how do we ensure that only users who actually paid for the content can invoke our download route? The answer is a cryptographic signature. Before redirecting users to our backend, the <DownloadButton> component lets users sign a unique message that is submitted as proof of account control:

const download = async () => {
  const url = `${process.env.REACT_APP_BOOK_SERVER}/books/${book.ISBN}/download`;

  const nonce = Web3.utils.randomHex(32);
  const dataToSign = Web3.utils.keccak256(`${account}${book.ISBN}${nonce}`);

  const signature = await web3.eth.personal.sign(dataToSign, account, "");

  //post the proof (assuming axios as the HTTP client)
  const resp = await
    url,
    {
      address: account,
      nonce,
      signature,
    },
    { responseType: "arraybuffer" }
  );
  // present that buffer as download to the user...
};

The backend’s download route can recover the signer’s address by assembling the message in the same way the user did before and calling the crypto suite’s ecrecover method using the message and the provided signature. If the recovered address matches a fulfilled PaymentRequest on our database, we know that we can permit access to the requested eBook resource:

  async (req: DownloadBookRequest, res: Response) => {
    const { signature, address, nonce } = req.body;

    //rebuild the message the user created on the frontend
    const signedMessage = Web3.utils.keccak256(`${address}${req.params.isbn}${nonce}`);

    //recover the signer's account from message & signature
    const signingAccount = await web3.eth.accounts.recover(signedMessage, signature);

    if (signingAccount !== address) {
      return res.status(401).json({ error: "not signed by address" });
    }

    //deliver the binary content...
  }

The proof of account ownership presented here is still not infallible: anyone who knows a valid signature for a purchased item can successfully call the download route. A more robust fix would be to create the random message on the backend first and have the customer sign and approve it. But since users cannot make any sense of the garbled hex code they’re supposed to sign, they wouldn’t know whether we were tricking them into signing another valid transaction that might compromise their accounts.

Although we’ve avoided this attack vector by using web3’s eth.personal.sign method, it would be better to display the message to be signed in a human-friendly way. That’s what EIP-712 achieves: a standard already supported by MetaMask.
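To give an idea of what EIP-712 signing looks like, here is a hedged sketch of a typed-data payload Amethon could present instead of a raw hash. The type name BookDownload and its fields are hypothetical, not part of the repository:

```javascript
// Hypothetical EIP-712 typed-data structure for the download request.
// A wallet like MetaMask renders these named fields human-readably
// before asking the user to sign.
const typedData = {
  domain: { name: "Amethon", version: "1", chainId: 1 },
  types: {
    BookDownload: [
      { name: "buyer", type: "address" },
      { name: "isbn", type: "string" },
      { name: "nonce", type: "bytes32" },
    ],
  },
  primaryType: "BookDownload",
  message: {
    buyer: "0xceeca1AFA5FfF2Fe43ebE1F5b82ca9Deb6DE3E42",
    isbn: "978-0060850524",
    nonce: "0x" + "00".repeat(32),
  },
};

console.log(typedData.primaryType); // BookDownload
```

The backend would then recover the signer from the structured hash defined by EIP-712 instead of a bare keccak256 of concatenated strings.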

Conclusion and Next Steps

Accepting payments on e-commerce websites has never been an easy task for developers. While the web3 ecosystem allows storefronts to accept digital currencies, the availability of service-independent plugin solutions falls short. This article demonstrated a safe, simple, and custom way to request and receive crypto payments.

There’s room to take the approach a step or two further. Gas costs for ERC20 transfers on the Ethereum mainnet exceed our book prices by far. Crypto payments for low-priced items would make sense in gas-friendly environments like Gnosis Chain (whose “native” Ether currency is DAI, so you wouldn’t even have to worry about stablecoin transfers) or Arbitrum. You could also extend the backend with cart checkouts or use DEXes to swap any incoming ERC20 tokens into your preferred currency.

After all, the promise of web3 is to allow direct monetary transactions without middlemen and to add significant value to online stores that want to engage their crypto-savvy customers.


Using Insomnia to Upgrade Dependencies — With Confidence

Demo app: “Is Today My Birthday”

Always keep your dependencies up to date. When you don’t upgrade, you miss out on bug fixes, security patches, and new features. You may even be up against an “end of life” deadline if the version of a package you use will soon no longer be supported.

If upgrading dependencies is so important, why don’t many developers do it? They may not know how, they may not understand the benefits of upgrading, or they may not feel like they have the time. Or, they may be afraid.

Why would developers be afraid to upgrade their dependencies? Because they think they might break something. And why are they afraid of breaking something? Because they don’t have good tests in place.

When you have a good test suite running against your codebase, you can upgrade your dependencies with confidence.

In this article, we’ll discuss semantic versioning, gotchas when upgrading dependencies, and how to upgrade dependencies with confidence. We’ll also use a small app to demonstrate how a good test suite can help you catch breaking changes from dependency upgrades before you deploy your app.

Semantic Versioning

Let’s briefly talk about semantic versioning and how it works. JavaScript packages typically follow semantic versioning, which is a set of three numbers representing the major, minor, and patch versions of the package. So if a package is set at version 2.4.1, then that’s major version 2, minor version 4, and patch version 1.

Patch versions typically include bug fixes and security patches. Minor versions can include new features. But neither patch versions nor minor versions are supposed to break or change the existing API of the package. Major versions can come with breaking changes, usually through removing an API method or significantly reworking the underlying architecture of the code.
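These rules are easy to encode. A minimal sketch in plain JavaScript (real projects would reach for the semver npm package instead of rolling their own):

```javascript
// Classify the jump between two semantic versions: "major" jumps may
// contain breaking changes, while "minor" and "patch" jumps should be
// safe by convention.
function upgradeKind(from, to) {
  const [fMaj, fMin] = from.split(".").map(Number);
  const [tMaj, tMin] = to.split(".").map(Number);
  if (tMaj !== fMaj) return "major";
  if (tMin !== fMin) return "minor";
  return "patch";
}

console.log(upgradeKind("2.4.1", "3.0.0")); // major: read the changelog first
console.log(upgradeKind("2.4.1", "2.5.0")); // minor: new features, same API
console.log(upgradeKind("2.4.1", "2.4.2")); // patch: bug and security fixes
```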

Gotchas When Upgrading Dependencies

If package developers follow semantic versioning properly, it’s generally safe for consumers of those packages to upgrade minor and patch versions in their app, since by definition breaking changes are not allowed in those releases. However, some package maintainers may not follow this standard very well or may accidentally release breaking changes without realizing it, so you never know for sure. But generally speaking, upgrades to patch and minor versions of a dependency should go smoothly.

It’s the major version that you need to be more careful with. When upgrading a package from one major version to the next, it’s always a good idea to consult the change log or release notes to see what’s changed.

Sometimes, the breaking changes in a major release don’t impact you, like if you aren’t using an API method that’s now been removed. Other times the changes will be relevant, and you’ll need to follow a migration guide to see what changes you need to make in order to use the new major version correctly. For massive breaking changes, sometimes developers will be kind enough to provide you with a codemod, a script that performs most or all of the changes for you.

The good news is that upgrading dependencies, even major versions, doesn’t need to be a scary experience.

Upgrading Dependencies With Confidence

A test suite with high code coverage will benefit you greatly as you upgrade your dependencies. If your code is well covered by tests, then the tests should give you confidence that your app will still work properly after upgrading. If all the tests pass, you should feel confident that the upgrades went off without a hitch. If any tests fail, you know which areas of your app to focus on.

If you don’t have tests for your app, start writing them now! A good set of tests goes a long way — not just when upgrading dependencies, but also when refactoring existing code, writing new features, and fixing bugs.

Even with a good test suite, a small amount of manual testing after upgrading dependencies is also a good idea, just as an added safety measure. After all, there may be gaps in your test coverage or edge cases you hadn’t considered.

If you do find gaps in your test suite during manual testing, you should write a quick test for what you find and then go fix the issue. That way you now have an automated test to ensure that the particular bug you found doesn’t happen again in the future.

Demo Time

Let’s now consider a small demo app that will help these abstract ideas become more concrete. Here we have a mind-blowingly useful app, Is Today My Birthday. This app is the best, easiest, and fastest way to determine if today is your birthday. Simply input your birth date and today’s date, and the app will tell you if today is in fact your birthday.

Demo app in action: “Is Today My Birthday”

Okay, I kid. But, we needed a simple app for demo purposes, so here we are.

This app is built with a Node.js and Express backend and a simple HTML, CSS, and vanilla JavaScript frontend. I used the date-fns package for working with dates, and I wrote API tests using Insomnia. I’m able to run the API tests from the command line using the Inso CLI, and I’ve even integrated them into a continuous integration pipeline with GitHub Actions. Pretty fancy, I know. You can view all of the code for this app on GitHub.

The relevant part of the code that determines if today is your birthday is reproduced below:

const format = require('date-fns/format');
const express = require('express');

const router = express.Router();

router.get('/', function (req, res) {
  if (!req.query.birthday) {
    return res.json({ data: 'Please provide your birthdate' });
  }

  // The name of the today's-date query parameter is assumed here
  const todaysDate = new Date( || new Date());
  const birthDate = new Date(req.query.birthday);

  const todaysMonthAndDay = format(todaysDate, 'MM-DD');
  const birthdayMonthAndDay = format(birthDate, 'MM-DD');

  const isTodayMyBirthday = todaysMonthAndDay === birthdayMonthAndDay;

  return res.json({ data: isTodayMyBirthday });
});

module.exports = router;

The output from the three tests we’ve written looks like this:
All three Insomnia tests are passing

So let’s consider for a moment what we might do when upgrading the version of date-fns that our app uses. I’ve purposefully used v1.30.1, to begin with, so that we can upgrade to v2.28.0 later. Going from v1 to v2 is a major release with breaking changes, and we’ll want to be sure that our app still works properly after we do our upgrades. If our app breaks after the upgrades, how will people ever be able to know if today is their birthday?

We’ll begin by changing the version of date-fns in our package.json file from v1.30.1 to v2.28.0. Then, we’ll run yarn install to install that new version.

After that, we can run our tests to see how things look:

Two tests are failing after upgrading the date-fns package

Oh no — we have some failures! Two of our three tests have failed, and it looks like we have a bad JSON response coming from our API. While it’s no fun to deal with failing tests, our tests have proved useful in detecting an issue when upgrading date-fns from v1 to v2.

If we investigate further, we’ll find the following error from date-fns: “RangeError: Use `dd` instead of `DD` (in `MM-DD`) for formatting days of the month.”

Looking back at our code, we have indeed used MM-DD as our date format. Consulting the change log for the 2.0.0 release of date-fns, we can see that one of the breaking changes is that the use of uppercase DD has been replaced with lowercase dd when formatting months and days together. Thanks for the helpful tip, change log!
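
For reference, v2’s dd token, like v1’s DD, means the zero-padded day of the month; in plain JavaScript terms, the MM-dd format amounts to something like this:

```javascript
// Zero-padded month and day: what both v1's 'MM-DD' and v2's 'MM-dd' produce
function formatMonthDay(date) {
  const month = String(date.getMonth() + 1).padStart(2, '0'); // getMonth() is 0-based
  const day = String(date.getDate()).padStart(2, '0');
  return `${month}-${day}`;
}

console.log(formatMonthDay(new Date(2022, 2, 7))); // "03-07"
```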

We can now make that simple change in our code so it looks like this:

const format = require('date-fns/format');
const express = require('express');

const router = express.Router();

router.get('/', function (req, res) {
  if (!req.query.birthday) {
    return res.json({ data: 'Please provide your birthdate' });
  }

  // The name of the today's-date query parameter is assumed here
  const todaysDate = new Date( || new Date());
  const birthDate = new Date(req.query.birthday);

  const todaysMonthAndDay = format(todaysDate, 'MM-dd');
  const birthdayMonthAndDay = format(birthDate, 'MM-dd');

  const isTodayMyBirthday = todaysMonthAndDay === birthdayMonthAndDay;

  return res.json({ data: isTodayMyBirthday });
});

module.exports = router;

We’ll then run our test suite again, and voila — all three tests are passing again. The order has been restored, and we’ve successfully upgraded one of the dependencies in our app.

Conclusion


Upgrading dependencies is important. Staying up to date means you have the latest bug fixes, security patches, and features. By updating your dependencies at regular intervals (perhaps once per month or once per quarter), you can avoid the panic of needing to upgrade end-of-life packages at the last minute.

Remember that tests help you upgrade with confidence. So what are you waiting for? Go write some tests and upgrade your app’s dependencies now!

Credit: Source link

How to Use Intrinsic Type Manipulations in TypeScript

Did you know that TypeScript comes with a bunch of built-in string manipulator types? That means we can easily transform a string type into uppercase, lowercase, or capitalized versions of itself. When might this be useful? In lots of situations where you have a fixed set of string values, for example, whenever you have a set of strings from an object using something like keyof.

To transform a string into a different version, define a new type using one of the type manipulators. For example, below, we transform myType into the capitalized version:

type myType = "hi there"
type capitalizedMyType = Capitalize<myType>

let myVar:capitalizedMyType = "Hi there";

In the above example, hi there would throw an error – only Hi there will work with a Capitalize type. Similarly, we can uncapitalize a string using Uncapitalize:

type myType = "HI THERE"
type uncapitalizedMyType = Uncapitalize<myType>

let myVar:uncapitalizedMyType = "hI THERE";

Above, only hI THERE will work. Any other case will fail the check.

Uppercase and Lowercase Intrinsic Type Manipulation

Along with Capitalize, we also have Uppercase and Lowercase. They both do exactly what you think they’d do – one changes all characters to uppercase, and the other to lowercase.

type myType = "this long sentence"
type bigMyType = Uppercase<myType>

let myVar:bigMyType = "THIS LONG SENTENCE"

Lowercase Intrinsic Type Manipulation

Above, we’ve created a type which is the uppercase version of myType. Here is the same example, but in the other direction: making the lowercase version:

type myType = "THIS LONG SENTENCE"
type smallMyType = Lowercase<myType>

let myVar:smallMyType = "this long sentence"

Using With Template Literals

These types become very useful when working with template literals, where we can enforce a string literal to use a lowercase version – for example, for an ID. Here is a version where we have a set number of strings, and we only want to use the lowercase version in the ID as part of the versionId type:

type versions = "DEV" | "PROD" | "STAGING"
type versionPrefix = "gold-" | "beta-" | "alpha-"
type versionId = `${versionPrefix}${Lowercase<versions>}`

let currentVersion:versionId = "gold-dev"

Above, we have two sets of union types, and we combine them into one versionId. Here, we only accept lowercase letters, but versions is uppercase. We can transform it with Lowercase, so gold-dev is valid.

Using With keyof Objects

In a similar way, we may receive an object from an API or elsewhere which has an odd or uneven casing. To normalize this, we can use intrinsic type manipulations. Below, I am using keyof to get all the keys in myObject, and making them all lowercase too.

type myObject = {
    firstName: string,
    Age: number,
    Lastname: string
}
type keys = Lowercase<keyof myObject>

let getObjectKey:keys = "firstname"

keys only has the following valid values: firstname | age | lastname. In this example, intrinsic string manipulation is an easy way to create clean types when the data at our disposal has casing issues.
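
Going a step further, key remapping in mapped types (added in TypeScript 4.1 alongside these intrinsic types) lets us lowercase the keys of the object type itself, not just produce a union of key names. The type names and values below are illustrative:

```typescript
// Same shape as myObject above, repeated here so the snippet stands alone
type ApiObject = {
    firstName: string,
    Age: number,
    Lastname: string
}

// Key remapping: rebuild the object type with every key lowercased,
// keeping each property's value type intact
type LowercaseKeys<T> = {
    [K in keyof T as Lowercase<string & K>]: T[K]
}

// Equivalent to { firstname: string, age: number, lastname: string }
const person: LowercaseKeys<ApiObject> = {
    firstname: "Ada",
    age: 36,
    lastname: "Lovelace"
}
```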


NativeScript vs. Flutter: A Comparison

With the growing demand for lifestyle and communication apps, mobile app development has become a booming industry. Building apps for both iOS and Android requires having two different teams with different skill sets, which can be a challenge for companies that might not have the resources to invest in two separate teams.

This process can be pretty time-consuming and expensive, but there is a solution. Cross-platform app development with technology like Flutter and NativeScript can be a more cost-effective solution.

The popularity of cross-platform app development has exploded in recent years, thanks to the availability of powerful frameworks that make creating apps for multiple platforms easy. With so many options available, making the right choice between NativeScript vs. Flutter can be tricky! And where there are too many choices, getting confused is natural, isn’t it?

NativeScript and Flutter are both technologies that allow you to build cross-platform mobile apps. They both have their pros and cons, but in general, NativeScript is more powerful, while Flutter is more flexible and easier to use. So, in this blog post, we’ll take a look at the similarities and differences between NativeScript and Flutter so you can decide which one is right for your next mobile app project.

All About Flutter 

Flutter, an open-source, cross-platform framework, uses Dart, a language created by Google. Flutter also provides a better UI toolkit for building cross-platform applications from one codebase. Flutter has many advantages, but the primary one is that it allows developers to create expressive and flexible UI with native performance.

Additionally, Flutter is backed by Google developers and a large community of developers who are constantly working to improve the platform. The Flutter team is very active in the community and has been very responsive to issues raised by the budding developers.  

Live Apps

Google Ads – To make ads work on both iOS and Android, Google used Dart packages, Firebase AdMob plugins, and static utility classes from Flutter.

Cryptograph – Cryptograph is an app that lets you keep track of a bunch of different cryptocurrencies, like Ethereum and Bitcoin. You can see how they’re doing and check their market history.

Tencent – Tencent, a prominent Chinese company with international operations, has used Flutter to build digital products, such as DingDang, AITeacher, K12, QiDian, Mr. Translator, and Now Live.

Alibaba – The Alibaba Group implements the Flutter technology stack in a number of commercial contexts, from e-commerce and feed streams to gamification-based interaction and internationalized services. Using Flutter and existing infrastructure, the team built and released new advanced features and user experiences to production in just a few days.

Advantages of Flutter


Fast and simple development: Flutter’s Hot Reloading feature is beneficial because it shows code changes immediately on emulators, simulators, and hardware without losing the application state.

High productivity: Flutter’s cross-platform capabilities allow you to use the same codebase for your iOS and Android app, saving you time and resources.


Compatibility: As widgets are part of the app and not the platform, there is a lower chance of encountering compatibility issues on different OS versions. This results in less testing time needed.

Open-source: Flutter and Dart are both open-source and free, and they have a lot of documentation and community support to assist with any problems you might run into.

Plentiful widgets: Flutter’s widgets are rich and follow the Cupertino (iOS) and Material Design (Android) guidelines.

Seamless integration: With Flutter, you can easily integrate your code with Java for Android and Swift/Objective-C for iOS, without needing to rewrite it.

Codesharing: Flutter is perfect for MVP development because it allows you to write code once and share it across multiple platforms.

Disadvantages of Flutter


  • Flutter has some great libraries and tools, but it’s not as robust as React Native.

  • Flutter apps are comparatively large; even a minimal app is bigger than 4 MB.

All About NativeScript

NativeScript is a framework that allows you to build native mobile apps using JavaScript. It provides you with the tools and APIs you need to develop fully functional native apps that run anywhere on iOS and Android. It also allows you to share code between your apps, which will help you build apps faster and with less code. 

With the power to transpile one programming language to another, NativeScript lets you develop genuinely native apps. Moreover, you can access the native APIs of devices directly using JavaScript, TypeScript, or Angular. 

Live Apps

Aura CO2: A CO2 monitoring app that tracks the air you breathe, helping you make the right decisions to maintain healthy, high-quality air.

Stonks Pro: A meme-investing stock market app that lets users browse dank memes and compete with their friends on leaderboards.

PUMA: Built high-quality, simple-interface mobile apps in a short time, with features including cloud storage, messaging, push notifications, and user authorization, among others.

SAP: Has built a mobile development kit with NativeScript that allows higher levels of code abstraction and enables JavaScript code to be rendered into native code without affecting the system network.

Advantages of NativeScript


Web skills: You can reuse web skills (JavaScript, CSS & HTML) and create truly native mobile apps for iOS and Android.

Quick to get started: Deep integration with popular JavaScript frameworks like Angular or Vue to minimize the learning curve.

Code sharing: You can share code between the web (when used with Angular) and mobile platforms, including UI.

Support: Dozens of online resources available for getting started, staying up-to-date, and troubleshooting.

Disadvantages of NativeScript


  • Not all user interface components are free to use;
  • Hard to share code with web builds;
  • Testing apps takes a long time;
  • Poor and sometimes buggy plugin support;
  • Slow response to new Android features.

Flutter vs NativeScript Battle 

Performance


Flutter utilizes a hardware-accelerated Skia 2D graphics engine for rendering and aims to deliver 60 to 120 FPS on devices capable of 120Hz updates. On the other hand, NativeScript can also maintain 60 FPS. 

Architecture


Flutter uses a layered architecture, allowing the use of simple or complex components as needed. In contrast, NativeScript follows the MVC or MVVM architectural pattern, which enables efficient module management and suits enterprise-grade applications.

Testing


Flutter offers a pack of testing features for widgets, integrations, and the entire app, with clear documentation, and it also supports automated testing. Conversely, NativeScript has had its own Quality Analysis workflow to follow since the version 2.5 release.

So, Which Is Better: Flutter or NativeScript? 

When it comes to building a mobile app, it’s hard to choose one framework over another. Both Flutter and NativeScript have their strengths, but it’s important to consider your app’s needs and goals when deciding which framework to use. 

Flutter and NativeScript frameworks alike make it easy to build beautiful apps that run on multiple platforms. Both have a long way to go in terms of performance, infrastructure, and plugins, which will make your decision even more challenging. 

If you want to build a high-quality app for the mobile world, hiring Flutter developers is the right choice for you. Flutter is a complete framework with many more advanced testing and debugging features.      


Building MVP With React and Firebase

While a concept is in the ideation phase, it is wise to simplify it and build a minimal core to see if it works per your needs. Moreover, a prototype will help you find out whether there is a market for your product before you commit significant resources. This minimal core is known as a ‘Minimum Viable Product,’ or MVP. Let us learn how to create an MVP using React and Firebase.

Why Should You Use Firebase?

Firebase is a well-equipped platform that serves as your backend. It brings several essential services together under one roof:

  • Database
  • Internet hosting
  • Authentication
  • Storage 
  • Analytics
  • Cloud features

When using Firebase, keep an eye on scaling, as some limitations may surface as your product’s reach grows. For your first MVP, though, Firebase can be the right solution.

Setup for the Firebase

Launch Firebase, sign in with your Google account, and follow these steps:

  • Create a Firebase project and name it ‘SitePointBooks.’
  • You don’t need to enable Google Analytics for this project. Choose the Create project button.
  • In the Firebase console, build a web app named ‘Sitepoint-books-app.’
  • Next, name the app. This name can be the same as your project’s; then choose the Register app option.
  • In the Add Firebase SDK step, choose Use npm and copy the generated output. You can choose from a wide range of Firebase SDKs.
  • Finally, note down your Firebase configuration and choose Continue to console.

If you wish to complete the setup more efficiently, you can copy the required script to the firebase.js file.  

Cloud Firestore

We will be using Cloud Firestore for the database, which lets you structure data as documents organized into collections.

In the console, go to the Firestore Database tab and choose the Create database option. A dialogue box will appear.

  • On the first page, select “Start in test mode.”
  • On the second page, set the region for the database and choose Enable.

After you have initialized the database, the next step is to populate it. Launch a second browser tab and copy the required ID from the record for whichever reference field needs it. Then you can begin populating the database.

Launch the Dev Server

After the database has been populated, you can run the “npm run dev” command and visit “localhost:3000” to interact with your project. Note that this is just a prototype, meant to give you a fair idea of how your application will work; some features might not work yet.

Routing


Structuring a CRUD layout for a project with more than one entity can get complicated quickly. Hence, it is advised to use React Router to implement a routing structure with a standardized syntax. A React JS development company can provide this if your organization is looking for such a service.

Database Service 

For a web platform or Node.js application, install the official firebase npm package in your project. It is essential, as this package contains all the tools necessary for connecting to the backend services, which makes the work very efficient. It is how you will communicate with your Cloud Firestore database. Next, export an object called “db” and import it into any of the React containers, after which you can start querying the database.
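
As a sketch of what that firebase.js file can look like with the modular (v9) SDK (the config values below are placeholders, not real credentials):

```javascript
// firebase.js — initialize the app once and export the Firestore handle.
// The firebaseConfig values come from the "Add Firebase SDK" step in the console.
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';

const firebaseConfig = {
  apiKey: 'YOUR_API_KEY',
  authDomain: '',
  projectId: 'YOUR_PROJECT_ID',
};

const app = initializeApp(firebaseConfig);

// React containers import this `db` object to query Cloud Firestore
export const db = getFirestore(app);
```

Any container can then import { db } from './firebase' and run queries against it.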

List of Documents

Now that the database service is all set, we will call it from one of the containers, for example “ScreenAuthorList.” Once all the data is fetched correctly, it is passed down via props to a presentation component, the “authorlist.”

Next, we will use React Query to manage all the server data in the front-end state. This package makes it much easier than any other method.

Other Services of Firebase

Many Firebase services cannot be covered in a single article, given the wide range on offer. All of them are in some way essential to the backend of your MVP app. A few of these services have already been mentioned above, so in this section we only give an overview.

As already mentioned, the security rules we have configured give public read and write access to our backend. To secure your Firebase project and protect it from intruders, you must go through the security rules thoroughly. For an even more secure setup, you should also implement Firebase Authentication, which will be helpful for your app.

Conclusion


Now you know how to register on Firebase, link collections to your UI, populate the database, and handle many other essentials. This React integration with Firebase gives developers a well-equipped platform for building MVPs more efficiently.

There are still many helpful tools and services offered by Firebase that become valuable when developing an MVP, including various Firebase Extensions. Check them out and make your next project a raging success.
