
· 7 min read
Himank Chaudhary

Search is a fundamental part of an application, especially when building retail and e-commerce applications, where search is key to a great shopping experience. This blog will demonstrate how Tigris makes it super easy to add real-time, relevance-based search to your application. Tigris has an embedded search engine that automatically makes all your data searchable.

Tigris + NuxtJS

This article focuses mainly on integrating full-text search capabilities using Tigris and Nuxt.js, and may skip over a few things like styling, which are pre-generated in the template used for this tutorial.

Here is a link to a working example of the e-commerce store that you will build. The source code is available in the GitHub repo if you feel like exploring on your own; otherwise, follow along with the tutorial.

Prerequisites​

You'll need a few things installed:

  • Node.js and npm
  • The Tigris CLI
  • The Netlify CLI (the app is run locally with netlify dev)

Getting Started​

The first step is to clone the repository that contains the starting source code.

Terminal
git clone -b ecommerce-search-scaffold git@github.com:tigrisdata/tigris-netlify-ecommerce.git

cd into the project directory

cd tigris-netlify-ecommerce

The layout of the project is shown below:

tigris-netlify-ecommerce
├── package.json
├── pages
│   ├── all.vue
│   ├── cart.vue
│   ├── index.vue
│   ├── women.vue
│   └── men.vue
├── layouts
│   └── default.vue
├── static
│   └── storedata.json
├── functions
│   ├── read-all-products.ts
│   ├── create-payment-intent.ts
│   └── handle-payment-succeeded.ts
├── models
│   └── tigris
│       └── catalog
│           └── products.ts
└── store
    └── index.js
  • package.json - Configuration for the Node project
  • pages/ - The Vue files that encapsulate the template, logic, and styling of each Vue component
  • functions/ - All serverless functions (API endpoints) for the application are defined in this directory
  • models/tigris/catalog/ - Manages the schema of this application. The database is catalog and the collection is products (sketched below)
  • store/ - Vuex store
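
For reference, here is a sketch of what the pre-generated models/tigris/catalog/products.ts contains. The fields and types are taken from the schema manifest printed by the setup script below; the actual file in the repo may differ slightly:

models/tigris/catalog/products.ts
import {
  TigrisCollectionType,
  TigrisDataTypes,
  TigrisSchema,
} from "@tigrisdata/core/dist/types";

export interface Product extends TigrisCollectionType {
  id: string;
  color: string;
  description: string;
  gender: string;
  name: string;
  review: string;
  starrating: number;
  price: number;
  sizes: string[];
  img: string;
}

export const ProductSchema: TigrisSchema<Product> = {
  id: { type: TigrisDataTypes.STRING, primary_key: { order: 1 } },
  color: { type: TigrisDataTypes.STRING },
  description: { type: TigrisDataTypes.STRING },
  gender: { type: TigrisDataTypes.STRING },
  name: { type: TigrisDataTypes.STRING },
  review: { type: TigrisDataTypes.STRING },
  starrating: { type: TigrisDataTypes.NUMBER },
  price: { type: TigrisDataTypes.NUMBER },
  sizes: {
    type: TigrisDataTypes.ARRAY,
    items: { type: TigrisDataTypes.STRING },
  },
  img: { type: TigrisDataTypes.STRING },
};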

Create a database, collection and load the dataset​

We will connect to Tigris Cloud to run this application. Let's first create an application key for your project to get TIGRIS_CLIENT_ID and TIGRIS_CLIENT_SECRET.

Terminal
tigris create application ecommerce_search "search tutorial"
Output
Terminal
{
  "id": "dummy-id",
  "name": "ecommerce_search",
  "description": "search tutorial",
  "secret": "dummy-secret",
  "created_at": 1668493288000,
  "created_by": "google-oauth2|107496644751065904534"
}

Once done, create a .env file and copy the "id" from the output above into TIGRIS_CLIENT_ID and the "secret" into TIGRIS_CLIENT_SECRET.

Terminal
TIGRIS_URI=api.preview.tigrisdata.cloud
TIGRIS_CLIENT_ID=<copy id from above>
TIGRIS_CLIENT_SECRET=<copy secret from above>

After this, just run the load target; it will automatically create your database and collection and load the dataset from static/storedata.json.

Terminal
npm run load
Output
Terminal
> ecommerce-netlify@1.0.0 load
> npm run setup:dev


> ecommerce-netlify@1.0.0 setup:dev
> NODE_ENV=development npm run setup


> ecommerce-netlify@1.0.0 setup
> npx ts-node scripts/setup.ts

event - Scanning /Users/himank/tigris-netlify-ecommerce/models/tigris for Tigris schema definitions
info - Found DB definition catalog
info - Found Schema file products.ts in catalog
info - Found schema definition: ProductSchema
debug - Generated Tigris Manifest: [{"dbName":"catalog","collections":[{"collectionName":"products","schema":{"id":{"type":"string","primary_key":{"order":1}},"color":{"type":"string"},"description":{"type":"string"},"gender":{"type":"string"},"name":{"type":"string"},"review":{"type":"string"},"starrating":{"type":"number"},"price":{"type":"number"},"sizes":{"type":"array","items":{"type":"string"}},"img":{"type":"string"}},"schemaName":"ProductSchema"}]}]
event - Created database: catalog
debug - {"title":"products","additionalProperties":false,"type":"object","properties":{"id":{"type":"string"},"color":{"type":"string"},"description":{"type":"string"},"gender":{"type":"string"},"name":{"type":"string"},"review":{"type":"string"},"starrating":{"type":"number"},"price":{"type":"number"},"sizes":{"type":"array","items":{"type":"string"}},"img":{"type":"string"}},"collection_type":"documents","primary_key":["id"]}
event - Created collection: products from schema: ProductSchema in db: catalog
Inserted 30 documents
Setup complete ...

Add full-text search capabilities​

To add full-text search to our application, we only need to do three things:

  • Add a serverless function that calls Tigris search
  • Add an async action in the Vuex store that calls the search serverless function
  • Add a search Vue page that captures the search text in the UI

Let's start by writing the serverless function that adds search to the e-commerce store. The Vuex store will use this function to power search in the application.

⌲ Add the following code inside functions/search-products.ts.​

functions/search-products.ts
import { Handler } from "@netlify/functions";
import { Tigris } from "@tigrisdata/core";
import { Product } from "~/models/tigris/catalog/products";

const tigris = new Tigris();

const handler: Handler = async (event, context) => {
  // the request body carries the user's query in the `q` field
  const searchReq = JSON.parse(event.body);

  if (!searchReq.q) {
    console.log("search keyword is missing");
    return {
      statusCode: 400,
      body: JSON.stringify({
        status: "search keyword is missing",
      }),
    };
  }

  try {
    const products = tigris
      .getDatabase("catalog")
      .getCollection<Product>("products");

    const searchResult = await products.search(searchReq);

    // collect the matching documents from the search hits
    const productHits = [];
    for (const hit of searchResult.hits) {
      productHits.push(hit.document);
    }
    return {
      statusCode: 200,
      body: JSON.stringify(productHits),
    };
  } catch (err) {
    console.log(err);
    return {
      statusCode: 500,
      body: JSON.stringify({
        status: err,
      }),
    };
  }
};

export { handler };

The main thing to note in the above serverless function is that we simply call search on the products collection.
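
Once the app is running locally (see "Run the app" below; netlify dev serves everything on http://localhost:8888), you can sanity-check this function straight from the terminal. The search keyword here is just an example:

curl -X POST http://localhost:8888/.netlify/functions/search-products \
  -H "Content-Type: application/json" \
  -d '{"q": "denim"}'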

Step 2: Integrate the search serverless function in the Vuex store​

The next step is to integrate the serverless function we just added into the Vuex store by adding an async action, searchProducts. As you can see in the following code, this async action passes the keyword to the serverless function added above. The keyword is the text the user wants to search for in the application. In Step 3 we will see how the Vue component passes the text to this async action.

⌲ Add the following code to the actions export const actions = {...} in store/index.ts​

searchProducts
async searchProducts({ commit }, keyword) {
  try {
    const response = await axios.post(
      "/.netlify/functions/search-products",
      {
        q: keyword,
      },
      {
        headers: {
          "Content-Type": "application/json",
        },
      }
    );
    if (response.data) {
      commit("searchProducts", response.data);
    }
  } catch (e) {
    console.log("error", e);
  }
}

The next step is to update the mutations to match the action we just added. Add searchProducts to export const mutations = {...} with the following code.

searchProducts
searchProducts: (state, payload) => {
  state.searchdata = payload;
},

Note: Add a new searchdata property to the state so that the mutation can update it; see the sketch below.
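
A minimal sketch of that state addition, assuming the Nuxt convention of a state factory function (your existing state, such as storedata, stays as-is):

export const state = () => ({
  // ...existing state such as storedata
  searchdata: [], // populated by the searchProducts mutation
});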

Finally, add a searchResult getter inside export const getters = {...} to access the search results:

searchResult
searchResult: (state) => state.searchdata,

Step 3: Add a search Vue page for the search text in the UI​

Create pages/search.vue and add the following code to it.

pages/search.vue
<template>
  <div>
    <div class="searchHeader">
      <input type="text" v-model="keyword" placeholder="Search Keyword" />
      <button
        class="searchBtn"
        @click="search"
        :disabled="loading || !keyword"
      >{{ !loading ? "Search Products" : "Loading..." }}</button>
    </div>

    <p class="noResults" v-if="usingSearch && !loading && searchResult.length < 1">No results found..</p>

    <app-store-grid :data="usingSearch ? searchResult : storedata" />
  </div>
</template>
<script>
import AppStoreGrid from "~/components/AppStoreGrid.vue";
import { mapGetters, mapState } from "vuex";

export default {
  components: {
    AppStoreGrid,
  },
  computed: {
    ...mapGetters(["searchResult"]),
    ...mapState(["storedata"]),
  },
  data() {
    return {
      keyword: "",
      error: "",
      loading: false,
      usingSearch: false,
    };
  },
  methods: {
    search() {
      this.loading = true;
      this.usingSearch = true;
      this.$store.dispatch("searchProducts", this.keyword).then(() => {
        this.loading = false;
      });
    },
  },
};
</script>

<style lang="scss" scoped>
.noResults {
text-align: center;
}
.searchBtn {
width: 180px;
}
.searchHeader {
display: flex;
justify-content: center;
gap: 10px;
margin-bottom: 40px;
}
</style>

Now, link to the search page from the AppNav.vue component:

<li>
  <nuxt-link to="/search">Search</nuxt-link>
</li>

At this point, you have successfully integrated search into your application. You can also check out the full code here.

Run the app​

Let's reap the rewards. Run netlify dev using the Netlify CLI in your terminal.

You should see the following output:

Terminal
netlify dev
Output
Terminal
◈ Netlify Dev ◈
◈ Ignored netlify.toml file env var: TIGRIS_URI (defined in .env file)
◈ Injected .env file env var: TIGRIS_URI
◈ Ignored general context env var: LANG (defined in process)
◈ Injected .env file env var: TIGRIS_CLIENT_ID
◈ Injected .env file env var: TIGRIS_CLIENT_SECRET
◈ Loaded function create-payment-intent.
◈ Loaded function handle-payment-succeeded.
◈ Loaded function read-all-products.
◈ Functions server is listening on 50405
◈ Setting up local development server

────────────────────────────────────────────────────────────────
Netlify Build
────────────────────────────────────────────────────────────────

❯ Version
@netlify/build 27.20.1

❯ Flags
{}

❯ Current directory
/Users/himank/tigris-netlify-ecommerce

❯ Config file
/Users/himank/tigris-netlify-ecommerce/netlify.toml

❯ Context
dev

────────────────────────────────────────────────────────────────
1. Run command for local development
────────────────────────────────────────────────────────────────

◈ Starting Netlify Dev with Nuxt 2
yarn run v1.22.19
warning ../package.json: No license field
$ nuxt dev
ℹ Listening on: http://localhost:3000/
ℹ Preparing project for development
ℹ Initial build may take a while
✔ Builder initialized
✔ Waiting for framework port 3000. This can be configured using the 'targetPort' property in the netlify.toml

(dev.command completed in 2s)
✔ Nuxt files generated

┌─────────────────────────────────────────────────┐
│                                                 │
│   ◈ Server now ready on http://localhost:8888   │
│                                                 │
└─────────────────────────────────────────────────┘

Voila! There you have it. The e-commerce store is accessible at http://localhost:8888 in your browser; go ahead and play around.

Summary​

Tigris has an embedded search engine that automatically makes all your data searchable. This blog demonstrated that adding search functionality to your application using Tigris search is super easy, and everything happens in code. You can also check out this product catalog in the Tigris console.

· One min read
Adil Ansari

Happy to announce that Next.js now has an officially supported example for bootstrapping your projects using Tigris. Creating a project with Next.js and Tigris is as simple as:

$ npx create-next-app --example with-tigris

Getting started​

Next.js lets you bootstrap apps using the create-next-app utility.

Install Tigris​
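
The Tigris CLI provides the local dev environment. On macOS, for example, you can install it with Homebrew:

$ brew install tigrisdata/tigris/tigris-cli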

Start up the Tigris dev environment​

$ tigris dev start

Initialize your Next.js app​

Use the official create-next-app utility

$ npx create-next-app --example with-tigris
Output
Terminal
$ ➔  npx create-next-app --example with-tigris
✔ What is your project named? … my-tigris-app
Creating a new Next.js app in ./my-tigris-app.

Downloading files for example with-tigris. This might take a moment.

Installing packages. This might take a couple of minutes.


added 67 packages, and audited 68 packages in 4s

5 packages are looking for funding
run `npm fund` for details

found 0 vulnerabilities

Initialized a git repository.

Success! Created my-tigris-app at ./my-tigris-app
Inside that directory, you can run several commands:

npm run dev
Starts the development server.

npm run build
Builds the app for production.

npm start
Runs the built app in production mode.

We suggest that you begin by typing:

cd my-tigris-app
npm run dev

Run the app​

Run the app, and see how it automatically creates your databases and collections.

$ cd my-tigris-app
$ npm run dev
Output
Terminal
$ ➔  cd my-tigris-app
$ ➔ npm run dev ±[main]

> predev
> APP_ENV=development npm run setup


> setup
> npx ts-node scripts/setup.ts

Loaded env from ./my-tigris-app/.env.development
event - Scanning ./my-tigris-app/models/tigris for Tigris schema definitions
info - Found DB definition todoStarterApp
info - Found Schema file todoItems.ts in todoStarterApp
info - Found schema definition: TodoItemSchema
debug - Generated Tigris Manifest: [{"dbName":"todoStarterApp","collections":[{"collectionName":"todoItems","schema":{"id":{"type":"int32","primary_key":{"order":1,"autoGenerate":true}},"text":{"type":"string"},"completed":{"type":"boolean"}},"schemaName":"TodoItemSchema"}]}]
event - Created database: todoStarterApp
debug - {"title":"todoItems","additionalProperties":false,"type":"object","properties":{"id":{"type":"integer","format":"int32","autoGenerate":true},"text":{"type":"string"},"completed":{"type":"boolean"}},"collection_type":"documents","primary_key":["id"]}
event - Created collection: todoItems from schema: TodoItemSchema in db: todoStarterApp

> dev
> next dev

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from ./my-tigris-app/.env.development
event - compiled client and server successfully in 856 ms (154 modules)

· 5 min read
Robert Barabas

In this blog we focus on how to dump and restore data stored in Tigris Server, an open source database, using the Tigris CLI. We will demonstrate this using our public beta environment.

Overview

At a high level the process looks as follows:

  • Setting up Tigris CLI
  • Authentication via the CLI
  • Performing the data dump or restore

Setting up Tigris CLI

We will need to grab the CLI binary that fits your platform and architecture. Several operating systems and architectures are supported.

I'm going to demonstrate using an M1 Mac laptop, but the process should be similar on other platforms.

Mac​

$ curl -sSL https://tigris.dev/cli-macos | sudo tar -xz -C /usr/local/bin

Alternatively, if you are a Homebrew user you may also use:

$ brew install tigrisdata/tigris/tigris-cli

Linux​

$ curl -sSL https://tigris.dev/cli-linux | sudo tar -xz -C /usr/local/bin

Windows​

C:\Users\robert> curl -sSLO https://tigris.dev/cli-windows
C:\Users\robert> tar xvzf cli-windows

Authentication

In order to interact with the public beta environment, we need to first authenticate our client with Tigris.

You can run the following CLI command to initiate login:

$ tigris login

This should initiate browser based authentication:

Tigris Login Window

Pick whichever authentication method is applicable to your user to authenticate.

A successful setup should yield the following message in the browser:

Tigris Login Succeeded

On the command line the utility should also inform you that authentication was successful and your credentials have been saved:

❯ tigris login
Opening login page in the browser. Please continue login flow there.
Successfully logged in

❯ cat ~/.tigris/tigris-cli.yaml
token: <redacted>
url: api.preview.tigrisdata.cloud

ℹ️ Your authentication process may look different if you are self-hosting Tigris. For instance, you will not have to authenticate if authentication has not been set up.

Performing Data Dump

We are going to use a simple script to dump all the contents into individual JSON files in the current working directory:

$ cat dump_all.sh
#!/usr/bin/env bash

# Increase Tigris Timeout
TIGRIS_TIMEOUT=1h
export TIGRIS_TIMEOUT

for DATABASE in `tigris list databases`
do
  echo " [.] Dumping schema of db ${DATABASE}"
  tigris describe database "${DATABASE}" --schema-only > "${DATABASE}.schema"
  for COLLECTION in `tigris list collections "${DATABASE}"`
  do
    echo " [*] Backing up collection ${DATABASE}:${COLLECTION}"
    tigris read "${DATABASE}" "${COLLECTION}" > "${DATABASE}.${COLLECTION}.json"
  done
done

The script above increases the client timeout, then iterates over all the databases and collections you have access to, dumping each collection into its own JSON file, one by one.

For example:

❯ ./dump_all.sh
[.] Dumping schema of db tigris_netlify_starter
[*] Backing up collection tigris_netlify_starter:todoItems
[.] Dumping schema of db tigris_starter_ts
[*] Backing up collection tigris_starter_ts:orders
[*] Backing up collection tigris_starter_ts:products
[*] Backing up collection tigris_starter_ts:social_messages
[*] Backing up collection tigris_starter_ts:user_events
[*] Backing up collection tigris_starter_ts:users
[.] Dumping schema of db tigris_vercel_starter
[*] Backing up collection tigris_vercel_starter:todoItems
[.] Dumping schema of db tigris_starter_java
[*] Backing up collection tigris_starter_java:orders
[*] Backing up collection tigris_starter_java:product_collection
[*] Backing up collection tigris_starter_java:users
[.] Dumping schema of db meteorites
[*] Backing up collection meteorites:landings
[.] Dumping schema of db ycsb_tigris
[*] Backing up collection ycsb_tigris:user_tables
[.] Dumping schema of db auth0
[*] Backing up collection auth0:users
[.] Dumping schema of db catalog
[*] Backing up collection catalog:products

❯ ls -la *.json *.schema
-rw-r--r-- 1 rbarabas staff 619 Nov 8 16:02 auth0.schema
-rw-r--r-- 1 rbarabas staff 49234 Nov 8 16:02 auth0.users.json
-rw-r--r-- 1 rbarabas staff 13866 Nov 8 16:02 catalog.products.json
-rw-r--r-- 1 rbarabas staff 424 Nov 8 16:02 catalog.schema
-rw-r--r-- 1 rbarabas staff 8892587 Nov 8 16:02 meteorites.landings.json
-rw-r--r-- 1 rbarabas staff 419 Nov 8 16:02 meteorites.schema
-rw-r--r-- 1 rbarabas staff 250 Nov 8 16:01 tigris_netlify_starter.schema
-rw-r--r-- 1 rbarabas staff 0 Nov 8 16:01 tigris_netlify_starter.todoItems.json
-rw-r--r-- 1 rbarabas staff 0 Nov 8 16:02 tigris_starter_java.orders.json
-rw-r--r-- 1 rbarabas staff 0 Nov 8 16:02 tigris_starter_java.product_collection.json
-rw-r--r-- 1 rbarabas staff 1722 Nov 8 16:02 tigris_starter_java.schema
-rw-r--r-- 1 rbarabas staff 173 Nov 8 16:02 tigris_starter_java.users.json
-rw-r--r-- 1 rbarabas staff 0 Nov 8 16:01 tigris_starter_ts.orders.json
-rw-r--r-- 1 rbarabas staff 0 Nov 8 16:01 tigris_starter_ts.products.json
-rw-r--r-- 1 rbarabas staff 1400 Nov 8 16:01 tigris_starter_ts.schema
-rw-r--r-- 1 rbarabas staff 553 Nov 8 16:01 tigris_starter_ts.social_messages.json
-rw-r--r-- 1 rbarabas staff 1161 Nov 8 16:02 tigris_starter_ts.user_events.json
-rw-r--r-- 1 rbarabas staff 563 Nov 8 16:02 tigris_starter_ts.users.json
-rw-r--r-- 1 rbarabas staff 250 Nov 8 16:02 tigris_vercel_starter.schema
-rw-r--r-- 1 rbarabas staff 172 Nov 8 16:02 tigris_vercel_starter.todoItems.json
-rw-r--r-- 1 rbarabas staff 545 Nov 8 16:02 ycsb_tigris.schema
-rw-r--r-- 1 rbarabas staff 7570000 Nov 8 16:02 ycsb_tigris.user_tables.json

Restore

To make the process a bit more interesting, in this example we are going to do a partial restore to an empty database. A full restore can be scripted similarly to the data dump; a sketch is included at the end of this section.

❯ tigris list databases
❯

First, let's make sure we will not time out during restore:

❯ export TIGRIS_TIMEOUT=1h

Next, let's create the database using the schema we captured prior:

❯ tigris create database tigris_vercel_starter
❯ tigris list databases
tigris_vercel_starter

Next, create its collections:

❯ cat tigris_vercel_starter.schema | tigris create collection tigris_vercel_starter
❯ tigris list collections tigris_vercel_starter
todoItems

Lastly, load the data into the database:

❯ cat tigris_vercel_starter.todoItems.json | tigris insert tigris_vercel_starter todoItems
❯ tigris read tigris_vercel_starter todoItems
{"text":"Pasta","completed":true,"id":6}
{"text":"Celery","completed":true,"id":10}
{"text":"Bread","completed":false,"id":12}
{"text":"Grocery","completed":false,"id":13}

Summary

I hope the above process gave you a good idea of how to perform data dumps out of and into Tigris. If this article piqued your interest and you would like to read more on operational matters, please let us know on our community interfaces! Join our Tigris Community Slack or Tigris Discord Server!


Tigris is the data platform built for developers! Use it as a scalable, ACID transactional, real-time backend for your serverless applications. Build data-rich features without worrying about slow queries or missing indexes. Seamlessly implement search within your applications with its embedded search engine. Connect serverless functions with its event streams to build highly responsive applications that scale automatically.

Sign up for the beta

Get early access and try out Tigris for your next application. Join our Slack or Discord community to ask any questions you might have.


· 11 min read
Robert Barabas

This blog outlines the deployment of Tigris on a Google Kubernetes Engine (GKE) Autopilot instance.

The installation will use recommended settings for redundancy, allocating more resources than a simple laptop-based installation would. For more information on the laptop-based installation, please consult our previous blog!

If you would rather watch a video, check out the deployment in action on YouTube:

Requirements

Below are the requirements for the installation box and the target Kubernetes environment.

The list of items required:

  • Helm
  • Google Cloud SDK
  • git and tigris-deploy repository
  • GKE cluster with sufficient quotas

Installation Host​

We will require Helm to perform the installation. It is assumed that the installation host already has access to the deployment target GKE cluster.

The version of helm used in this blog was:

❯ helm version
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.19.2"}

To interface with the GKE cluster using kubectl conveniently, you may want to install the GKE plugin. You can install it with this command:

❯ gcloud components install gke-gcloud-auth-plugin

GKE​

Fortunately, GKE Autopilot clusters automatically come with a set of controllers installed. The list includes GKE Ingress, which enables the creation of external load balancers for Ingress resources, and controllers that manage other aspects of GCP, such as persistent disks.

One of the challenges of ensuring a successful deployment in GCP is managing quotas efficiently. You will want to ensure quotas allow for sufficient CPU and SSD storage allocation.
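
One way to eyeball the relevant quotas from the command line is gcloud; the project ID below is a placeholder, and you can equally check them on the Quotas page in the GCP console:

❯ gcloud compute project-info describe --project my-gcp-project | grep -B1 -A1 -E 'CPUS|SSD_TOTAL_GB'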

Using the defaults of the Helm Chart, the following quotas proved to be sufficient:

GCP Quotas

Deployment

The installation deploys the following components:

  • Kubernetes Operator for FoundationDB
  • FoundationDB
  • Tigris Search (TypeSense)
  • Tigris Server

You can install the components individually or together, using the encompassing tigris-stack Helm Chart. Below I’m going to use this Chart to install Tigris.

Prepare For Deployment​

Next, check out the deploy script repository:

❯ git clone git@github.com:tigrisdata/tigris-deploy.git
Cloning into 'tigris-deploy'...
remote: Enumerating objects: 177, done.
remote: Counting objects: 100% (97/97), done.
remote: Compressing objects: 100% (60/60), done.
remote: Total 177 (delta 43), reused 68 (delta 34), pack-reused 80
Receiving objects: 100% (177/177), 87.68 KiB | 568.00 KiB/s, done.
Resolving deltas: 100% (63/63), done.

Navigate to the folder which contains the helm chart of tigris-stack:

❯ cd tigris-deploy/helm/tigris-stack

Deploy Tigris Stack​

To ensure there is initial quorum for Tigris Search, we should deploy it initially with a single replica.

❯ helm install tigris-stack . --set tigris-search.replicas=1
W1103 11:56:22.823655 12264 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
W1103 11:56:30.072806 12264 warnings.go:70] Autopilot increased resource requests for Deployment default/tigris-server to meet requirements. See http://g.co/gke/autopilot-resources.
W1103 11:56:30.089432 12264 warnings.go:70] Autopilot increased resource requests for Deployment default/tigris-stack-fdb-operator to meet requirements. See http://g.co/gke/autopilot-resources.
W1103 11:56:30.232424 12264 warnings.go:70] Autopilot set default resource requests on StatefulSet default/tigris-search for container tigris-ts-node-mgr, as resource requests were not specified, and adjusted resource requests to meet requirements. See http://g.co/gke/autopilot-defaults and http://g.co/gke/autopilot-resources.
NAME: tigris-stack
LAST DEPLOYED: Thu Nov 3 11:56:25 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

At this point your cluster will likely only have a few nodes:

❯ kubectl get nodes
W1103 11:57:04.068108 12352 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME STATUS ROLES AGE VERSION
gk3-doc-default-pool-ddd321b8-4v8x Ready <none> 42h v1.23.8-gke.1900
gk3-doc-default-pool-e88cea62-9b77 Ready <none> 42h v1.23.8-gke.1900

The pods will be in the Pending state and trigger pod scale-ups:

❯ kubectl get pods
W1103 11:56:43.749022 12327 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME READY STATUS RESTARTS AGE
tigris-search-0 0/2 Pending 0 14s
tigris-server-8646cb4b7b-fz6h4 0/1 Pending 0 14s
tigris-server-8646cb4b7b-hmxj9 0/1 Pending 0 14s
tigris-server-8646cb4b7b-qsjw7 0/1 Pending 0 14s
tigris-stack-fdb-operator-8fd845b9-wb4r5 0/1 Pending 0 14s


❯ kubectl describe pod tigris-search-0 | tail
W1103 11:58:18.395905 12695 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Node-Selectors: <none>
Tolerations: kubernetes.io/arch=amd64:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 108s gke.io/optimize-utilization-scheduler 0/2 nodes are available: 2 Insufficient cpu, 2 Insufficient memory.
Warning FailedScheduling 38s gke.io/optimize-utilization-scheduler 0/3 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 2 Insufficient cpu, 2 Insufficient memory.
Normal TriggeredScaleUp 26s cluster-autoscaler pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/mystic-berm-360500/zones/us-west2-a/instanceGroups/gk3-doc-nap-10cyk06a-9f9e9a3f-grp 0->1 (max: 1000)}]

Tigris will restart a few times before it changes state to Running. This is due to the unavailability of FoundationDB, the key-value store Tigris uses for persistence.

As you can see below, fdb is still in a Pending state when the tigris-server Pods are already up:

❯ kubectl get pods
W1103 12:05:30.762386 14893 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME READY STATUS RESTARTS AGE
fdb-cluster-log-1 0/2 Pending 0 43s
fdb-cluster-log-2 0/2 Pending 0 43s
fdb-cluster-log-3 0/2 Pending 0 42s
fdb-cluster-log-4 0/2 Pending 0 42s
fdb-cluster-log-5 0/2 Pending 0 42s
fdb-cluster-stateless-1 0/2 Pending 0 43s
fdb-cluster-stateless-10 0/2 Pending 0 43s
fdb-cluster-stateless-2 0/2 Pending 0 43s
fdb-cluster-stateless-3 0/2 Pending 0 43s
fdb-cluster-stateless-4 0/2 Pending 0 43s
fdb-cluster-stateless-5 0/2 Pending 0 43s
fdb-cluster-stateless-6 0/2 Pending 0 43s
fdb-cluster-stateless-7 0/2 Pending 0 43s
fdb-cluster-stateless-8 0/2 Pending 0 43s
fdb-cluster-stateless-9 0/2 Pending 0 43s
fdb-cluster-storage-1 0/2 Pending 0 43s
fdb-cluster-storage-2 0/2 Pending 0 43s
fdb-cluster-storage-3 0/2 Pending 0 43s
fdb-cluster-storage-4 0/2 Pending 0 43s
fdb-cluster-storage-5 0/2 Pending 0 43s
tigris-search-0 2/2 Running 1 (5m49s ago) 9m1s
tigris-server-8646cb4b7b-fz6h4 0/1 ContainerCreating 0 9m1s
tigris-server-8646cb4b7b-hmxj9 0/1 CrashLoopBackOff 1 (6s ago) 9m1s
tigris-server-8646cb4b7b-qsjw7 0/1 CrashLoopBackOff 2 (7s ago) 9m1s
tigris-stack-fdb-operator-8fd845b9-zgr4t 1/1 Running 0 5m55s

ℹ️ You can improve the deployment sequence by using more sophisticated deployment methods, such as Synchronization Waves in ArgoCD!

Give Autopilot enough time to scale up nodes for the deployment. FoundationDB will likely trigger a separate scale-up event on its own.

❯ kubectl get nodes
W1103 12:09:59.375610 16639 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME STATUS ROLES AGE VERSION
gk3-doc-default-pool-ddd321b8-4v8x Ready <none> 42h v1.23.8-gke.1900
gk3-doc-default-pool-e88cea62-9b77 Ready <none> 42h v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-4qss Ready <none> 4m23s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-6fd2 Ready <none> 4m21s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-m6hp Ready <none> 4m23s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-p8zq Ready <none> 4m21s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-r744 Ready <none> 4m22s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-854c84a8-xj5b Ready <none> 4m20s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-4m2r Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-d6nm Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-ggxv Ready <none> 4m17s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-lfwl Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-s456 Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-slg8 Ready <none> 4m19s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-vg27 Ready <none> 11m v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-xf4k Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-9f9e9a3f-xptm Ready <none> 4m18s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-5hpx Ready <none> 4m13s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-96c2 Ready <none> 4m12s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-c7h8 Ready <none> 4m13s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-klm4 Ready <none> 4m12s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-mrqp Ready <none> 4m12s v1.23.8-gke.1900
gk3-doc-nap-10cyk06a-c0284c87-wwj2 Ready <none> 4m12s v1.23.8-gke.1900
gk3-doc-nap-qm2jb0jm-1393ada1-bgwt Ready <none> 11m v1.23.8-gke.1900
gk3-doc-nap-qm2jb0jm-6d70fd3a-pxdr Ready <none> 12m v1.23.8-gke.1900

Following the scale-up of the nodes, the services slowly come up as well, as they wait for foundational services to start. After about 15 minutes the Pods should become available:

❯ kubectl get pods
W1103 12:10:45.077224 16929 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME READY STATUS RESTARTS AGE
fdb-cluster-log-1 2/2 Running 0 5m57s
fdb-cluster-log-2 2/2 Running 0 5m57s
fdb-cluster-log-3 2/2 Running 0 5m56s
fdb-cluster-log-4 2/2 Running 0 5m56s
fdb-cluster-log-5 2/2 Running 0 5m56s
fdb-cluster-stateless-1 2/2 Running 0 5m57s
fdb-cluster-stateless-10 2/2 Running 0 5m57s
fdb-cluster-stateless-2 2/2 Running 0 5m57s
fdb-cluster-stateless-3 2/2 Running 0 5m57s
fdb-cluster-stateless-4 2/2 Running 0 5m57s
fdb-cluster-stateless-5 2/2 Running 0 5m57s
fdb-cluster-stateless-6 2/2 Running 0 5m57s
fdb-cluster-stateless-7 2/2 Running 0 5m57s
fdb-cluster-stateless-8 2/2 Running 0 5m57s
fdb-cluster-stateless-9 2/2 Running 0 5m57s
fdb-cluster-storage-1 2/2 Running 0 5m57s
fdb-cluster-storage-2 2/2 Running 0 5m57s
fdb-cluster-storage-3 2/2 Running 0 5m57s
fdb-cluster-storage-4 2/2 Running 0 5m57s
fdb-cluster-storage-5 2/2 Running 0 5m57s
tigris-search-0 2/2 Running 1 (11m ago) 14m
tigris-server-8646cb4b7b-95lcf 1/1 Running 0 2m37s
tigris-server-8646cb4b7b-gff64 1/1 Running 2 (3m12s ago) 3m23s
tigris-server-8646cb4b7b-hmxj9 1/1 Running 5 (3m59s ago) 14m
tigris-stack-fdb-operator-8fd845b9-zgr4t 1/1 Running 0 11m

That’s it, your Tigris deployment should now be on its way up!

Validate Deployment​

This time we are going to validate Tigris Server using the Tigris CLI, from a small Linux Pod deployed in the same namespace as the Tigris stack.
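
If you don't already have such a Pod handy, one way to start one is with kubectl run; the image here is just an example choice (on a stock ubuntu image you would need to apt install curl first):

❯ kubectl run tigris-cli-shell --rm -it --image=ubuntu -- bash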

First we need to install the CLI:

$ curl -sSL https://tigris.dev/cli-linux | sudo tar -xz -C /usr/local/bin
...
$ ls -la /usr/local/bin/tigris
-rwxr-xr-x 1 1001 121 17264640 Nov 3 07:21 /usr/local/bin/tigris

Set TIGRIS_URL to point at the Service endpoint of tigris-server:

$ export TIGRIS_URL=http://tigris-http:80

After that see if you can interact with the Tigris database using the tigris utility:

$ tigris quota limits
{
  "ReadUnits": 100,
  "WriteUnits": 25,
  "StorageSize": 104857600
}

$ tigris server info
{
  "server_version": "v1.0.0-beta.17"
}

$ tigris server version
tigris server version at http://tigris-http:80 is v1.0.0-beta.17

$ tigris create database robert

$ tigris list databases
robert

Preparing For Production

Scaling Search Out​

To ensure Search is also redundant, once the deployment has progressed past its transient state, Tigris Search should be scaled up to multiple replicas. In order to maintain quorum, the number of replicas should be set to an odd number, at a minimum of 3.

The command below increases the number of Tigris Search replicas to 5, which should be sufficient for an initial production deployment:

❯ helm upgrade tigris-stack . --set tigris-search.replicas=5
W1103 18:12:06.790278 82440 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
W1103 18:12:14.011524 82440 warnings.go:70] Autopilot increased resource requests for Deployment default/tigris-stack-fdb-operator to meet requirements. See http://g.co/gke/autopilot-resources.
W1103 18:12:14.362641 82440 warnings.go:70] Autopilot increased resource requests for Deployment default/tigris-server to meet requirements. See http://g.co/gke/autopilot-resources.
W1103 18:12:14.711610 82440 warnings.go:70] Autopilot increased resource requests for StatefulSet default/tigris-search to meet requirements. See http://g.co/gke/autopilot-resources.
Release "tigris-stack" has been upgraded. Happy Helming!
NAME: tigris-stack
LAST DEPLOYED: Thu Nov 3 18:12:08 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

You can verify that additional replicas were started, using kubectl:

❯ kubectl get pods | grep tigris
W1103 18:12:33.301669 82537 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
tigris-search-0 2/2 Running 8 (25m ago) 6h16m
tigris-search-1 0/2 Pending 0 19s
tigris-search-2 0/2 Pending 0 19s
tigris-search-3 0/2 Pending 0 18s
tigris-search-4 0/2 Pending 0 18s
tigris-server-8646cb4b7b-95lcf 1/1 Running 0 6h4m
tigris-server-8646cb4b7b-gff64 1/1 Running 2 (6h5m ago) 6h5m
tigris-server-8646cb4b7b-hmxj9 1/1 Running 5 (6h5m ago) 6h16m
tigris-stack-fdb-operator-8fd845b9-zgr4t 1/1 Running 0 6h12m

The replicas should catch up quickly, as there isn't much search index data to synchronize yet. However, GKE Autopilot might need to scale up the nodes first:

❯ kubectl describe pod tigris-search-1 | tail
W1103 18:14:04.069915 83269 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
Node-Selectors: <none>
Tolerations: kubernetes.io/arch=amd64:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 110s gke.io/optimize-utilization-scheduler 0/24 nodes are available: 24 Insufficient cpu, 24 Insufficient memory.
Normal TriggeredScaleUp 74s cluster-autoscaler pod triggered scale-up: [{https://www.googleapis.com/compute/v1/projects/mystic-berm-360500/zones/us-west2-c/instanceGroups/gk3-doc-nap-2qbw2tfi-b7486e29-grp 0->1 (max: 1000)} {https://www.googleapis.com/compute/v1/projects/mystic-berm-360500/zones/us-west2-a/instanceGroups/gk3-doc-nap-2qbw2tfi-efcf60fb-grp 0->1 (max: 1000)}]
Warning FailedScheduling 23s gke.io/optimize-utilization-scheduler 0/26 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate, 24 Insufficient cpu, 24 Insufficient memory.

It should take only a minute or two for them to reach the Running state:

❯ kubectl get pods | grep tigris-search
W1103 18:15:05.957816 83699 gcp.go:119] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.26+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
tigris-search-0 2/2 Running 8 (27m ago) 6h18m
tigris-search-1 2/2 Running 0 2m52s
tigris-search-2 2/2 Running 0 2m52s
tigris-search-3 2/2 Running 0 2m51s
tigris-search-4 2/2 Running 0 2m51s

Terminating TLS​

For a production installation you will want to add a certificate to your load balancer. However, as this step does not involve anything Tigris-specific, we are going to skip the details here.

Wrapping Up!

I hope the above illustrates how easy it is to deploy Tigris to GKE Autopilot! Feel free to compare it to the article about deploying Tigris to EKS, where we discussed the steps necessary to deploy it to AWS!

If you have any suggestions for Tigris-related subjects that you think people might find interesting, feel free to reach out to us on either our Tigris Community Slack channel or our Tigris Discord server!

Hope you enjoyed reading or watching! If you did, stay tuned, as next we are going to cover a few more interesting subjects, such as performing logical backups and restores with Tigris!



· 3 min read
Ovais Tariq

We are excited to announce that Tigris is now available on the Vercel Integrations Marketplace. If you are already using Vercel to develop and ship data-rich applications or considering it for a new application, this integration enables you to add Tigris, an Open Source Ops-free Serverless alternative to MongoDB Atlas, to your Vercel application within a few minutes.

Vercel and Tigris

Build data-rich applications with ease

Vercel is known for providing a great experience to developers to deploy and scale Next.js applications with ease and in a configuration-free manner. Features such as CI/CD, serverless functions, analytics and content delivery at the edge simplify the development workflow and enable the developers to focus on building applications.

Tigris is the perfect companion to Vercel! Tigris is an open source, ACID-transactional serverless document store which brings a Vercel-like modern Ops-free developer experience for database users.

Scale confidently with a true serverless database

Unlike MongoDB Atlas, Tigris is built to be serverless from the ground up. Storage, compute, and data indexing are built as separate layers that can be scaled independently. This is how Tigris provides a true serverless experience and is able to scale easily based on the application's needs.

Tigris provides a native HTTP interface that makes it work well with serverless applications where traditional databases suffer from connection-related issues.

The only database made for your development workflow

All the interactions with Tigris happen in code. Your development workflow is:

  1. Define data models
  2. Implement application logic, and
  3. Push the code to production.

All the database changes (creation, modification) get taken care of automatically, with no need to manually execute queries or click buttons.

It doesn't stop there: once your application is in production, Tigris doesn't put the burden of DBA operations on you. It provides automated data indexing, which removes the need for DBA operations and means all queries are always fast, with no infra setup or configuration needed.

Finally, unlike MongoDB Atlas, Tigris' local development environment can run the entire platform in a single container, so you can develop locally and be sure that your code behaves the same way in production.

Get started today

If you're ready to start building your next application with Tigris and Vercel, getting started is simple. Select Tigris on the Vercel Integrations Marketplace and automatically create and link Tigris with your Vercel project in just a few clicks. We also have a starter app available for you to get you started quickly.

Vercel Tigris integration page


Tigris is the data platform built for Next.js applications! Use it as a scalable, ACID transactional, real-time backend for your serverless applications. Build rich features with dynamic data without worrying about slow database queries or missing indexes. Seamlessly implement search within your applications with its embedded search engine. Connect serverless functions with its event streams to build highly responsive applications that scale automatically.

Sign up for the beta

Get early access and try out Tigris for your next Next.js application. Join our Slack or Discord community to ask any questions you might have.

· 8 min read
Robert Barabas

This blog outlines the deployment of Tigris on AWS managed Elastic Kubernetes Service (EKS). In future blogs we will walk through other aspects of the product setup for Tigris as a service, such as the setup of authentication. Stay tuned!

The installation will use recommended settings for redundancy, allocating more resources than a simple laptop-based installation would. For more information on the laptop-based installation, please consult our previous blog!

If you would rather watch a video, check out the deployment in action on YouTube:

Requirements

Installation Host​

The following components will need to be installed on the machine you are performing the deployment steps on:

  • Helm
  • AWS CLI

Helm will be used to install the Tigris Stack Chart:

❯ helm version
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.19.2"}

The AWS CLI will be used to set up a wildcard certificate:

❯ aws --version
aws-cli/2.8.5 Python/3.10.8 Darwin/21.6.0 source/arm64 prompt/off

EKS​

Outside of the above you will need an EKS cluster with access and sufficient resources available for deployment.

❯ kubectl get nodes -L beta.kubernetes.io/instance-type
NAME STATUS ROLES AGE VERSION INSTANCE-TYPE
ip-10-2-10-204.us-west-2.compute.internal Ready <none> 18h v1.21.14-eks-ba74326 r6i.2xlarge
ip-10-2-10-6.us-west-2.compute.internal Ready <none> 6d v1.21.14-eks-ba74326 r6i.2xlarge
ip-10-2-11-166.us-west-2.compute.internal Ready <none> 18h v1.21.14-eks-ba74326 r6i.2xlarge
ip-10-2-11-169.us-west-2.compute.internal Ready <none> 18h v1.21.14-eks-ba74326 r6i.2xlarge
ip-10-2-12-79.us-west-2.compute.internal Ready <none> 6d v1.21.14-eks-ba74326 r6i.2xlarge
ip-10-2-13-192.us-west-2.compute.internal Ready <none> 6d v1.21.14-eks-ba74326 r6i.2xlarge

Most of the resources will be consumed by FoundationDB and TypeSense, both of which are main building blocks of Tigris.

The EKS cluster must have the following components installed:

  • AWS Load Balancer Controller
  • Cert Manager
  • EBS CSI Controller
❯ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
aws-load-balancer-controller kube-system 1 2022-10-20 11:00:30.800715 -0400 EDT deployed aws-load-balancer-controller-1.4.4 v2.4.3
cert-manager kube-system 1 2022-10-20 11:00:27.772092 -0400 EDT deployed cert-manager-v1.6.1 v1.6.1
cert-manager-ca kube-system 1 2022-10-20 11:01:45.36529 -0400 EDT deployed cert-manager-ca-0.2.0 v0.1.0
external-dns kube-system 1 2022-10-20 11:00:25.898907 -0400 EDT deployed external-dns-6.7.1 0.12.1
metrics-server kube-system 1 2022-10-20 11:00:26.973139 -0400 EDT deployed metrics-server-3.8.1 0.6.1

Deployment

The installation deploys the following components:

  • Kubernetes Operator for FoundationDB
  • FoundationDB
  • Tigris Search (TypeSense)
  • Tigris Server

You can install the components individually or together, using the encompassing tigris-stack Helm Chart. Below I’m going to use this Chart to install Tigris.

Create Certificate for TLS​

First, we need to generate a certificate for TLS. This certificate will be used on the load balancer to terminate TLS connections:

❯ aws acm request-certificate --domain-name='*.example.com'
{
  "CertificateArn": "arn:aws:acm:us-west-2:878843336588:certificate/fe257207-b117-4db0-ad6b-eef8d66308cd"
}

Prepare For Deployment​

Next, check out the deploy script repository:

❯ git clone git@github.com:tigrisdata/tigris-deploy.git
Cloning into 'tigris-deploy'...
remote: Enumerating objects: 177, done.
remote: Counting objects: 100% (97/97), done.
remote: Compressing objects: 100% (60/60), done.
remote: Total 177 (delta 43), reused 68 (delta 34), pack-reused 80
Receiving objects: 100% (177/177), 87.68 KiB | 568.00 KiB/s, done.
Resolving deltas: 100% (63/63), done.

Navigate to the folder which contains the helm chart of tigris-stack:

❯ cd tigris-deploy/helm/tigris-stack

Deploy Tigris Stack​

As we are deploying to EKS, we are going to use an ALB as our load balancer. The host for this installation will be set to api.example.com.

⚠️ You will want to make sure that the above hostname matches the domain of the wildcard certificate you created previously!

Finally, to ensure there is initial quorum for Tigris Search, we deploy it initially with a single replica:

❯ helm install tigris-stack . -f ./values.yaml --set tigris-server.ingress_aws.enabled=true --set tigris-server.tls_hostname="api.example.com" --set tigris-search.replicas=1

NAME: tigris-stack
LAST DEPLOYED: Tue Oct 25 18:58:53 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

That’s it, your Tigris deployment should now be on its way up!

Validate Deployment​

Generally speaking, there are two high-level boxes to check. First, we should ensure that all the resources were scheduled and are available and running. Second, we will want to make sure that the Tigris API is accessible through the Ingress resource allocated. These steps are expanded upon below.

Resources Validation​

Allow resources such as FoundationDB a couple of minutes to initialize. In a production-ready installation FoundationDB allocates significant resources, so you will want to make sure that the FDB Pods were able to be scheduled.

Tigris will not enter the Running state until FoundationDB becomes fully functional. It might take a couple of minutes for FoundationDB to become available, even when all of its Pods appear to be in the Running state.

You can validate if the FDB key-value store is available using fdbcli:

tigris@tigris-server-58ccd7bb9f-czcbb:/server$ fdbcli -C /mnt/fdb-config-volume/cluster-file
Using cluster file `/mnt/fdb-config-volume/cluster-file'.

The database is available.

Welcome to the fdbcli. For help, type `help'.
fdb>

Look for the message "The database is available" in fdbcli's output.

Final Overview

Unless there were additional customizations, your output should be similar to below:

❯ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/fdb-cluster-log-1 2/2 Running 0 5m53s
pod/fdb-cluster-log-2 2/2 Running 0 5m53s
pod/fdb-cluster-log-3 2/2 Running 0 5m53s
pod/fdb-cluster-log-4 2/2 Running 0 5m53s
pod/fdb-cluster-log-5 2/2 Running 0 5m53s
pod/fdb-cluster-stateless-1 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-10 2/2 Running 0 5m53s
pod/fdb-cluster-stateless-2 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-3 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-4 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-5 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-6 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-7 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-8 2/2 Running 0 5m54s
pod/fdb-cluster-stateless-9 2/2 Running 0 5m53s
pod/fdb-cluster-storage-1 2/2 Running 0 5m54s
pod/fdb-cluster-storage-2 2/2 Running 0 5m54s
pod/fdb-cluster-storage-3 2/2 Running 0 5m54s
pod/fdb-cluster-storage-4 2/2 Running 0 5m54s
pod/fdb-cluster-storage-5 2/2 Running 0 5m54s
pod/tigris-search-0 2/2 Running 1 6m49s
pod/tigris-server-58ccd7bb9f-czcbb 1/1 Running 4 6m49s
pod/tigris-server-58ccd7bb9f-ngjk5 1/1 Running 4 6m49s
pod/tigris-server-58ccd7bb9f-rnbxb 1/1 Running 4 6m49s
pod/tigris-stack-fdb-operator-5d9dbc4c9d-ptlng 1/1 Running 0 6m49s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 5d8h
service/tigris-grpc NodePort 172.20.60.127 <none> 80:30440/TCP 6m50s
service/tigris-headless ClusterIP None <none> 8080/TCP 6m50s
service/tigris-http NodePort 172.20.82.191 <none> 80:30675/TCP 6m50s
service/tigris-search NodePort 172.20.130.194 <none> 80:31720/TCP 6m50s
service/ts ClusterIP None <none> 8108/TCP 6m50s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tigris-server 3/3 3 3 6m50s
deployment.apps/tigris-stack-fdb-operator 1/1 1 1 6m50s

NAME DESIRED CURRENT READY AGE
replicaset.apps/tigris-server-58ccd7bb9f 3 3 3 6m50s
replicaset.apps/tigris-stack-fdb-operator-5d9dbc4c9d 1 1 1 6m50s

NAME READY AGE
statefulset.apps/tigris-search 1/1 6m50s

Ingress Validation​

The most EKS-specific piece of a Tigris installation is generally the load balancer and related resources.

The installation will create an annotated Ingress resource:

❯ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
tigris-server <none> * 80 3m25s
❯ kubectl get ingress tigris-server -o yaml | grep -A17 annotations:
annotations:
  alb.ingress.kubernetes.io/backend-protocol: HTTP
  alb.ingress.kubernetes.io/conditions.tigris-grpc: |
    [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
  alb.ingress.kubernetes.io/group.name: tigris-server-lb
  alb.ingress.kubernetes.io/healthcheck-path: /v1/health
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
  alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
  alb.ingress.kubernetes.io/load-balancer-name: tigris-server-lb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
  alb.ingress.kubernetes.io/tags: service=tigris-server
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.app_cookie.duration_seconds=10,stickiness.type=app_cookie,stickiness.app_cookie.cookie_name=Tigris-Tx-Id
  alb.ingress.kubernetes.io/target-type: ip
  external-dns.alpha.kubernetes.io/ingress-hostname-source: annotation-only
  kubernetes.io/ingress.class: alb
  meta.helm.sh/release-name: tigris-stack
  meta.helm.sh/release-namespace: default

AWS Load Balancer Controller will create an ALB based on the above annotations:

❯ aws elbv2 describe-load-balancers --names tigris-server-lb | grep -i dnsname
"DNSName": "tigris-server-lb-<redacted>.us-west-2.elb.amazonaws.com",

Make sure that your load balancer is healthy and operational before you proceed!

Preparing For Production

There is one last step that is required for a proper, production-ready installation.

In order to ensure there is proper redundancy under Tigris Search, you will want to increase the number of replicas to an odd number after the initial installation.

An odd number of replicas is required to ensure that quorum can be reached. With an even number of replicas, a network partition could leave both partitions with the same number of replicas and no tie breaker.

The command below increases the number of Tigris Search replicas to 5:

❯ helm upgrade tigris-stack . -f ./values.yaml --set tigris-server.ingress_aws.enabled=true --set tigris-server.tls_hostname="api.example.com" --set tigris-search.replicas=5
Release "tigris-stack" has been upgraded. Happy Helming!
NAME: tigris-stack
LAST DEPLOYED: Tue Oct 25 19:13:33 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

We recommend using at least 3 replicas for production installations, with 5 recommended for load handling purposes. This is the default setting in the tigris-stack Chart; it is only reduced during install to ensure quorum can be achieved during initialization.

You can verify that there are now 5 replicas running with a simple kubectl command:

❯ kubectl get pods | grep tigris
tigris-search-0 2/2 Running 1 17m
tigris-search-1 2/2 Running 0 2m39s
tigris-search-2 2/2 Running 0 2m39s
tigris-search-3 2/2 Running 0 2m39s
tigris-search-4 2/2 Running 0 2m39s
tigris-server-58ccd7bb9f-czcbb 1/1 Running 4 17m
tigris-server-58ccd7bb9f-ngjk5 1/1 Running 4 17m
tigris-server-58ccd7bb9f-rnbxb 1/1 Running 4 17m
tigris-stack-fdb-operator-5d9dbc4c9d-ptlng 1/1 Running 0 17m

Final Thoughts​

In this blog we covered the deployment of Tigris to one of the most common managed Kubernetes platforms available in the cloud. As you can see from the above, the deployment process is fairly easy and straightforward on EKS.

If you liked our blog make sure to follow us as next week we are going to cover the deployment of Tigris on Google Cloud Platform!

Enjoy using Tigris as much as we enjoy building it!!!



· 9 min read
Adil Ansari
Taha Khan

Next.js gives you the best developer experience with all the features you need to build modern, fast production-ready applications. Tigris is the perfect companion for Next.js as it is similarly built with developer experience in mind and is truly serverless: build data-rich features, seamlessly implement search, and easily use it with serverless functions, all without needing to do Ops.

Next.js and Tigris

Now with the introduction out of the way, it is time to demonstrate how.

This is the first of a series of blog posts where we will demonstrate how easy it is to build Next.js apps with Tigris. We will build a to-do list app and deploy it to Vercel. The to-do list app will have the following features:

  • add to-do items to the list
  • update to-do items as completed
  • delete to-do items
  • search for to-do items in the list

To follow along with this tutorial you can get the code from the GitHub repo. This is how the to-do app will look once it's deployed: https://tigris-nextjs-starter-kit.vercel.app/

Prerequisites​

For this tutorial you'll need:

  1. GitHub account (sign up for free)
  2. Tigris Cloud account (sign up for free)
  3. Vercel account (sign up for free) to deploy app
  4. Node.js 16+
  5. npm and npx

Deploying the to-do list app to Vercel​

We will start off first by deploying the pre-prepared to-do list app to Vercel from the GitHub repo. Then once it is deployed and running, we will explore the code in detail.

Create a project on Vercel​

Vercel makes it easier to deploy Git projects with a few clicks.

Hit the following Deploy button to get started with the Vercel workflow to clone the repo to your account.

Deploy with Vercel

This should take you to Vercel, to the "Create Git Repository" step.

Vercel create repo

Add Tigris integration​

Pick a name for your new Git repo, and then you'll configure the Tigris integration, which will set up the environment variables needed to connect to Tigris: TIGRIS_URI, TIGRIS_CLIENT_ID, and TIGRIS_CLIENT_SECRET.

Vercel environment

Hit the Add button and it will take you to the Tigris integration page, where in a few simple steps you will be able to configure the integration.

Vercel integrate Tigris

Hit the Continue button and that's it!

Once the deployment completes, continue to your project dashboard on Vercel, where you'll find the URL for your to-do list app.

Vercel project dashboard

🎉 All done. Visit the URL in your browser to access your to-do list app and play around. 🎉

Now let's continue to explore the code for the to-do list app to see how easily Tigris can be integrated with Next.js.

Code walk-through​

This section will elaborate on important aspects of the to-do list app you just deployed. Let's glance over the important components of the project.

File structure
|-- package.json
|-- lib
|   |-- tigris.ts
|-- models
|   |-- tigris
|       |-- todoStarterApp
|           |-- todoItems.ts
|-- pages
    |-- index.tsx
    |-- api
        |-- item
        |   |-- [id].ts
        |-- items
            |-- index.ts
            |-- search.ts

Tigris data models and schemas - models/tigris​

With Tigris it all starts with the data model! Tigris stores data records as documents. Documents are analogous to JSON objects but Tigris stores them in an optimized binary format. Documents are grouped together in collections.

The to-do list app has a single collection todoItems that stores the to-do items. The first thing you would do is define the schema.

Tigris follows the convention of having the models and schemas stored in the models/tigris directory. Within this directory, the todoStarterApp directory is our database name, and the file todoItems.ts stores the schema for the collection named todoItems:

models/tigris/todoStarterApp/todoItems.ts
import {
  TigrisCollectionType,
  TigrisDataTypes,
  TigrisSchema,
} from "@tigrisdata/core/dist/types";

export const COLLECTION_NAME = "todoItems";

export interface TodoItem extends TigrisCollectionType {
  id?: number;
  text: string;
  completed: boolean;
}

export const TodoItemSchema: TigrisSchema<TodoItem> = {
  id: {
    type: TigrisDataTypes.INT32,
    primary_key: { order: 1, autoGenerate: true },
  },
  text: { type: TigrisDataTypes.STRING },
  completed: { type: TigrisDataTypes.BOOLEAN },
};

Connecting to Tigris - lib/tigris.ts​

This file loads the environment variables that were populated by the Tigris integration and configures the Tigris client. This client is used to manage all the Tigris operations from here on. Also, note how we cache the client instance so that it can be reused for subsequent requests.

lib/tigris.ts
import { DB, Tigris, TigrisClientConfig } from "@tigrisdata/core";

const DB_NAME = "todoStarterApp";

if (!process.env.TIGRIS_URI) {
  throw new Error("Cannot find TIGRIS_URI environment variable");
}

const tigrisUri = process.env.TIGRIS_URI;
const clientConfig: TigrisClientConfig = { serverUrl: tigrisUri };

if (process.env.TIGRIS_CLIENT_ID) {
  clientConfig.clientId = process.env.TIGRIS_CLIENT_ID;
}
if (process.env.TIGRIS_CLIENT_SECRET) {
  clientConfig.clientSecret = process.env.TIGRIS_CLIENT_SECRET;
}

declare global {
  // eslint-disable-next-line no-var
  var tigrisDb: DB;
}

let tigrisDb: DB;

if (process.env.NODE_ENV === "development") {
  // re-use the same connection in dev
  if (!global.tigrisDb) {
    const tigrisClient = new Tigris(clientConfig);
    global.tigrisDb = tigrisClient.getDatabase(DB_NAME);
  }
  tigrisDb = global.tigrisDb;
} else {
  const tigrisClient = new Tigris(clientConfig);
  tigrisDb = tigrisClient.getDatabase(DB_NAME);
}

// export to share DB across modules
export default tigrisDb;

Creating the database and collection - scripts/setup.ts​

The file scripts/setup.ts automatically sets up the database and collection at build time. It looks for the models in the directory models/tigris and creates the databases and collections in an idempotent way, completing almost instantly.
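The script itself is not shown in this walkthrough, but conceptually it boils down to something like the sketch below, using the same SDK calls that appear elsewhere in this tutorial. Treat it as a simplified sketch rather than the actual script; the helper names in the real setup.ts may differ.

scripts/setup.ts (simplified sketch)
import { Tigris } from "@tigrisdata/core";
import {
  COLLECTION_NAME,
  TodoItem,
  TodoItemSchema,
} from "../models/tigris/todoStarterApp/todoItems";

async function main() {
  if (!process.env.TIGRIS_URI) {
    throw new Error("Cannot find TIGRIS_URI environment variable");
  }
  // Same configuration pattern as lib/tigris.ts; clientId/clientSecret
  // would be wired in from the environment the same way (omitted here)
  const tigrisClient = new Tigris({ serverUrl: process.env.TIGRIS_URI });

  // Both calls are idempotent: they create the database/collection when
  // missing, and otherwise leave things as they are (or apply an online
  // schema update)
  const db = await tigrisClient.createDatabaseIfNotExists("todoStarterApp");
  await db.createOrUpdateCollection<TodoItem>(COLLECTION_NAME, TodoItemSchema);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});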

Pages​

Let's take a look at fetchListItems() in this React component that loads and renders the to-do list items.

// Fetch Todo List
const fetchListItems = () => {
  setIsLoading(true);
  setIsError(false);

  fetch("/api/items")
    .then((response) => response.json())
    .then((data) => {
      setIsLoading(false);
      if (data.result) {
        setViewMode("list");
        setTodoList(data.result);
      } else {
        setIsError(true);
      }
    })
    .catch(() => {
      setIsLoading(false);
      setIsError(true);
    });
};

Evidently this React component is only rendering the items returned by /api/items.

Similarly, addToDoItem(), which adds a to-do list item, simply makes a POST request to /api/items.

// Add a new to-do item
const addToDoItem = () => {
  if (queryCheckWiggle()) {
    return;
  }
  setIsLoading(true);

  fetch("/api/items", {
    method: "POST",
    body: JSON.stringify({ text: textInput, completed: false }),
  }).then(() => {
    setIsLoading(false);
    setTextInput("");
    fetchListItems();
  });
};

We will now dive into the API routes to see how they integrate with Tigris, which powers our application.

Tigris and Serverless Functions

All the API routes are deployed as Serverless Functions. Tigris is serverless itself and natively supports HTTP. This makes it a perfect fit for Serverless Functions.

API routes to find and add items​

All the Next.js API routes are defined under /pages/api/. We have three files: /pages/api/items/index.ts, /pages/api/items/search.ts, and /pages/api/item/[id].ts, which expose the following endpoints:

  • GET /api/items to get an array of to-do items as Array<TodoItem>
  • POST /api/items to add an item to the list
  • GET /api/items/search?q=query to find and return items matching the given query
  • GET /api/item/{id} to fetch an item
  • PUT /api/item/{id} to update the given item
  • DELETE /api/item/[id] to delete an item

Let's look at the /api/items API that supports both GET and POST handlers.

pages/api/items/index.ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { COLLECTION_NAME, TodoItem } from '../../../models/tigris/todoStarterApp/todoItems'
import tigrisDb from '../../../lib/tigris'

type Response = {
  result?: Array<TodoItem>,
  error?: string
}

// GET /api/items -- gets items from collection
// POST /api/items {TodoItem} -- inserts a new item to collection
export default async function handler (
  req: NextApiRequest,
  res: NextApiResponse<Response>
) {
  switch (req.method) {
    case 'GET':
      await handleGet(req, res)
      break
    case 'POST':
      await handlePost(req, res)
      break
    default:
      res.setHeader('Allow', ['GET', 'POST'])
      res.status(405).end(`Method ${req.method} Not Allowed`)
  }
}

...

The handleGet() method fetches and returns items from the Tigris collection. Let's take a look at its implementation; you will see how easy it is to fetch data from Tigris.

pages/api/items/index.ts
async function handleGet(req: NextApiRequest, res: NextApiResponse<Response>) {
  try {
    const itemsCollection = tigrisDb.getCollection<TodoItem>(COLLECTION_NAME);
    const cursor = itemsCollection.findMany();
    const items = await cursor.toArray();
    res.status(200).json({ result: items });
  } catch (err) {
    const error = err as Error;
    res.status(500).json({ error: error.message });
  }
}

The itemsCollection.findMany() function sends a query to Tigris and returns a cursor to fetch results from the collection.

Let's look at the handlePost() implementation, which inserts a TodoItem into the collection using the insertOne() function.

pages/api/items/index.ts
async function handlePost(req: NextApiRequest, res: NextApiResponse<Response>) {
  try {
    const item = JSON.parse(req.body) as TodoItem;
    const itemsCollection = tigrisDb.getCollection<TodoItem>(COLLECTION_NAME);
    const inserted = await itemsCollection.insertOne(item);
    res.status(200).json({ result: [inserted] });
  } catch (err) {
    const error = err as Error;
    res.status(500).json({ error: error.message });
  }
}
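The /pages/api/item/[id].ts route is not covered line by line here, but it follows the same pattern for fetching, updating, and deleting a single item. As a rough sketch of its GET handler, reusing the imports and Response type from index.ts above (the exact filter shape is an assumption for this SDK version):

pages/api/item/[id].ts (sketch)
async function handleGet(
  req: NextApiRequest,
  res: NextApiResponse<Response>,
  itemId: number
) {
  try {
    const itemsCollection = tigrisDb.getCollection<TodoItem>(COLLECTION_NAME);
    // findOne takes a filter over the collection's fields and resolves to
    // the matching document, or undefined when nothing matches
    const item = await itemsCollection.findOne({ id: itemId });
    if (!item) {
      res.status(404).json({ error: "Item not found" });
    } else {
      res.status(200).json({ result: [item] });
    }
  } catch (err) {
    const error = err as Error;
    res.status(500).json({ error: error.message });
  }
}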

API route to search items​

Tigris makes it really easy to implement search within your applications by providing an embedded search engine that makes all your data instantly searchable.

Let's take a look at the search handler to see how easy it is to add powerful real-time search functionality. The itemsCollection.search function sends a search query to Tigris and fetches the documents that match the query.

Tigris real-time search

Note how you did not have to set up Elasticsearch or configure search indexes. It was all taken care of for you automatically.

pages/api/items/search.ts
import type { NextApiRequest, NextApiResponse } from "next";
// Note: the SearchRequest import path below is an assumption based on the
// package layout used elsewhere in this tutorial; it may differ by SDK version
import { SearchRequest } from "@tigrisdata/core/dist/search/types";
import { COLLECTION_NAME, TodoItem } from "../../../models/tigris/todoStarterApp/todoItems";
import tigrisDb from "../../../lib/tigris";

type Data = {
  result?: Array<TodoItem>;
  error?: string;
};

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<Data>
) {
  const query = req.query["q"];
  if (query === undefined) {
    res.status(400).json({ error: "No search query found in request" });
    return;
  }
  try {
    const itemsCollection = tigrisDb.getCollection<TodoItem>(COLLECTION_NAME);
    const searchRequest: SearchRequest<TodoItem> = { q: query as string };
    const searchResult = await itemsCollection.search(searchRequest);
    const items = new Array<TodoItem>();
    for (const hit of searchResult.hits) {
      items.push(hit.document);
    }
    res.status(200).json({ result: items });
  } catch (err) {
    const error = err as Error;
    res.status(500).json({ error: error.message });
  }
}

Summary​

In this tutorial, you deployed a to-do list Next.js app that uses Tigris as the backend. You saw all the powerful functionality that Tigris provides, and how easy it is to use it within Serverless Functions.

Tigris is the easiest way to work with data in your Next.js applications. Together, Tigris and Next.js give developers the fastest way to build data-rich and highly-responsive applications.

You can find the complete source code for this tutorial in the GitHub repo; feel free to raise issues or contribute to the project.

Happy learning!


Tigris is the data platform built for Next.js applications! Use it as a scalable, ACID transactional, real-time backend for your serverless applications. Build rich features with dynamic data without worrying about slow database queries or missing indexes. Seamlessly implement search within your applications with its embedded search engine. Connect serverless functions with its event streams to build highly responsive applications that scale automatically.

Sign up for the beta

Get early access and try out Tigris for your next Next.js application. Join our Slack or Discord community to ask any questions you might have.

· 8 min read
Peter Boros
Robert Barabas

Tigris is an open source developer data platform that makes building data-rich applications a breeze. This is the first of a series of blog posts where we show you how to deploy Tigris in various environments. In this first post, we will show you how to set up the Tigris platform on your laptop. In our next posts we will cover deploying Tigris on EKS and GKE.

We will use tigris-deploy, which is a set of Helm charts and a wrapper script. Using it, we will deploy the complete Tigris stack including FoundationDB, Typesense, Grafana, VictoriaMetrics, Tigris itself, and an nginx-based load balancer. For this example, we will use minikube, but you may use k3s or Kind if you prefer.

If you don't want to use the provided metrics store and Grafana, you can easily turn those off via the values YAML. Tigris itself provides Prometheus-compatible metrics on the /metrics URL.

Watch the video below to see the deployment in action.

Clone tigris-deploy repository​

The quickest way to deploy Tigris is using its Helm Chart. To perform a Helm-based installation, you need to clone the tigris-deploy repository. This has several helm charts in it. One of them is tigris-stack, a "chart of charts" that installs all the dependencies, so installing that will result in a fully working Tigris platform.

git clone git@github.com:tigrisdata/tigris-deploy.git
cd tigris-deploy

Start minikube​

In this example, we will use minikube to have a kubernetes cluster. The goal of this step is to have a working kubernetes cluster and a kubernetes client that is configured to use that cluster.

minikube start --kubernetes-version=1.21.14
Output
😄  minikube v1.27.1 on Darwin 12.6
✨ Automatically selected the docker driver
📌 Using Docker Desktop driver with root privileges
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=16300MB) ...
🐳 Preparing Kubernetes v1.21.14 on Docker 20.10.18 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

After minikube is started, no pods will run in the default namespace.

kubectl get pods
Output
No resources found in default namespace.

But you should see pods in the kube-system namespace.

kubectl get pods -A
Output
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system coredns-558bd4d5db-7x8hv 1/1 Running 0 82s
kube-system etcd-minikube 1/1 Running 0 94s
kube-system kube-apiserver-minikube 1/1 Running 0 94s
kube-system kube-controller-manager-minikube 1/1 Running 0 94s
kube-system kube-proxy-69xh5 1/1 Running 0 82s
kube-system kube-scheduler-minikube 1/1 Running 0 94s
kube-system storage-provisioner 1/1 Running 0 92s

At this point, we have minikube working, ready to deploy tigris in it.

Deploy Tigris​

For the sake of simplicity, we provide deploy.sh, a wrapper script. This script extends the Chart's functionality and addresses common tasks such as the setup of dependencies for the Chart.

We will deploy Tigris with no redundancy to minimize resource consumption and make Tigris fit on a reasonably equipped laptop.

For Production, you will want to enable full redundancy. Without overrides, the Charts will deploy resources with redundancy enabled.

bash deploy.sh
Output
Getting updates for unmanaged Helm repositories...
...Successfully got an update from the "https://kubernetes.github.io/ingress-nginx" chart repository
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "vm" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 7 charts
Downloading ingress-nginx from repo https://kubernetes.github.io/ingress-nginx
Downloading victoria-metrics-single from repo https://victoriametrics.github.io/helm-charts
Downloading grafana from repo https://grafana.github.io/helm-charts
Deleting outdated charts
Release "tigris-stack" does not exist. Installing it now.
W1014 17:51:00.854310 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 17:51:00.856463 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 17:51:00.858829 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 17:51:09.081654 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 17:51:09.081939 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 17:51:09.082027 48946 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: tigris-stack
LAST DEPLOYED: Fri Oct 14 17:50:59 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1

The platform should become available 4-5 minutes after the deploy script is run. This can be verified with kubectl get all. Once it's ready, you will see output like the following.

kubectl get all
Output
NAME                                                         READY   STATUS    RESTARTS   AGE
pod/fdb-cluster-log-1 2/2 Running 0 2m18s
pod/fdb-cluster-stateless-1 2/2 Running 0 2m18s
pod/fdb-cluster-storage-1 2/2 Running 0 2m18s
pod/tigris-search-0 2/2 Running 1 4m45s
pod/tigris-server-79f77c8cb7-cxwxk 1/1 Running 0 4m45s
pod/tigris-stack-fdb-operator-5d9dbc4c9d-6f6b2 1/1 Running 0 4m45s
pod/tigris-stack-grafana-7586c54dc-24fvn 1/1 Running 0 4m45s
pod/tigris-stack-ingress-nginx-controller-57c4689667-th296 1/1 Running 0 4m45s
pod/tigris-stack-victoria-metrics-single-server-0 1/1 Running 0 4m45s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9m7s
service/tigris-grpc NodePort 10.109.166.16 <none> 80:31740/TCP 4m45s
service/tigris-headless ClusterIP None <none> 8080/TCP 4m45s
service/tigris-http NodePort 10.100.38.18 <none> 80:31945/TCP 4m45s
service/tigris-search NodePort 10.102.197.16 <none> 80:32466/TCP 4m45s
service/tigris-stack-grafana ClusterIP 10.111.119.96 <none> 80/TCP 4m45s
service/tigris-stack-ingress-nginx-controller LoadBalancer 10.102.175.141 <pending> 80:31965/TCP,443:31050/TCP 4m45s
service/tigris-stack-ingress-nginx-controller-admission ClusterIP 10.100.243.2 <none> 443/TCP 4m45s
service/tigris-stack-victoria-metrics-single-server ClusterIP None <none> 8428/TCP 4m45s
service/ts ClusterIP None <none> 8108/TCP 4m45s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tigris-server 1/1 1 1 4m45s
deployment.apps/tigris-stack-fdb-operator 1/1 1 1 4m45s
deployment.apps/tigris-stack-grafana 1/1 1 1 4m45s
deployment.apps/tigris-stack-ingress-nginx-controller 1/1 1 1 4m45s

NAME DESIRED CURRENT READY AGE
replicaset.apps/tigris-server-79f77c8cb7 1 1 1 4m45s
replicaset.apps/tigris-stack-fdb-operator-5d9dbc4c9d 1 1 1 4m45s
replicaset.apps/tigris-stack-grafana-7586c54dc 1 1 1 4m45s
replicaset.apps/tigris-stack-ingress-nginx-controller-57c4689667 1 1 1 4m45s

NAME READY AGE
statefulset.apps/tigris-search 1/1 4m45s
statefulset.apps/tigris-stack-victoria-metrics-single-server 1/1 4m45s

Here is an example output when the service is not ready yet.

Output
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/tigris-search-0 0/2 ContainerCreating 0 40s
pod/tigris-server-79f77c8cb7-cxwxk 0/1 ContainerCreating 0 40s
pod/tigris-stack-fdb-operator-5d9dbc4c9d-6f6b2 0/1 Init:0/3 0 40s
pod/tigris-stack-grafana-7586c54dc-24fvn 0/1 Init:0/1 0 40s
pod/tigris-stack-ingress-nginx-controller-57c4689667-th296 1/1 Running 0 40s
pod/tigris-stack-victoria-metrics-single-server-0 0/1 Running 0 40s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5m2s
service/tigris-grpc NodePort 10.109.166.16 <none> 80:31740/TCP 40s
service/tigris-headless ClusterIP None <none> 8080/TCP 40s
service/tigris-http NodePort 10.100.38.18 <none> 80:31945/TCP 40s
service/tigris-search NodePort 10.102.197.16 <none> 80:32466/TCP 40s
service/tigris-stack-grafana ClusterIP 10.111.119.96 <none> 80/TCP 40s
service/tigris-stack-ingress-nginx-controller LoadBalancer 10.102.175.141 <pending> 80:31965/TCP,443:31050/TCP 40s
service/tigris-stack-ingress-nginx-controller-admission ClusterIP 10.100.243.2 <none> 443/TCP 40s
service/tigris-stack-victoria-metrics-single-server ClusterIP None <none> 8428/TCP 40s
service/ts ClusterIP None <none> 8108/TCP 40s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tigris-server 0/1 1 0 40s
deployment.apps/tigris-stack-fdb-operator 0/1 1 0 40s
deployment.apps/tigris-stack-grafana 0/1 1 0 40s
deployment.apps/tigris-stack-ingress-nginx-controller 1/1 1 1 40s

NAME DESIRED CURRENT READY AGE
replicaset.apps/tigris-server-79f77c8cb7 1 1 0 40s
replicaset.apps/tigris-stack-fdb-operator-5d9dbc4c9d 1 1 0 40s
replicaset.apps/tigris-stack-grafana-7586c54dc 1 1 0 40s
replicaset.apps/tigris-stack-ingress-nginx-controller-57c4689667 1 1 1 40s

NAME READY AGE
statefulset.apps/tigris-search 0/1 40s
statefulset.apps/tigris-stack-victoria-metrics-single-server 0/1 40s

From the services part of the kubectl get all output, it's visible that the tigris-http and tigris-grpc services are available in kubernetes. Tigris provides the same API over both HTTP and gRPC. We will use gRPC with the tigris command line client to try it out.

Testing Tigris​

We will use minikube to make these services available on our computer.

minikube service tigris-grpc --url
Output
http://127.0.0.1:52999
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

This exposes the tigris service on port 52999. We can use the tigris command line client to interact with the platform running in kubernetes. Apart from this, now everybody knows I was writing this blog post using macOS :)

export TIGRIS_URL="127.0.0.1:52999"
tigris list databases
tigris create database foobar

Now we should be able to see the database we just created.

tigris list databases
Output
foobar

With our small interaction there, we listed the available databases twice and created one. Now let's check what metrics we have about these interactions. To do that, we need to interact with the grafana service that was deployed by tigris-deploy. First, we need to fetch grafana's admin password.

kubectl get secret tigris-stack-grafana -o yaml | awk '/admin-password/ {print $2}' | base64 --decode ; echo

The output of this command is the password that can be used to access grafana. Now we can access the grafana service with minikube service, just like before.

minikube service tigris-stack-grafana --url
Output
😿  service default/tigris-stack-grafana has no node port
http://127.0.0.1:53122
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

After logging in with the admin password, click on Explore to access the metrics.

Grafana explore button

The VictoriaMetrics instance stores the metrics and is pre-configured as a data source. Switch the query input field from builder to code.

Grafana explore screen

Once this is done, check the requests_count_ok metric.

Grafana explore metric

By checking requests_count_ok, we can view the metrics about the 3 requests we just issued. For further information, check out the documentation on metrics, as well as the documentation on deployment.



· 6 min read

Google first introduced a real-time search function to its search engine over a decade ago (2010). Today, it is virtually impossible to imagine Google without it. Yet, strangely, real-time and live searches aren't as widespread as they should be. The only real players with comprehensive real-time search capabilities are social media platforms and search engines.

App and web developers often avoid adding this functionality because it is either too difficult or they feel it's unnecessary. However, it can be highly useful for e-commerce apps and websites that consistently post fresh content (such as blogs).

The last decade saw a shift in the software industry's priorities as greater emphasis has been placed on user experience (UX) and UI design. Knowing how and when to implement real-time search is a crucial skill to have as a modern developer. The following guide will explore what it is and where and when to use it.

Most website and app full-text search functions use indexes that are routinely updated by crawlers. However, this approach is unsuitable for social media sites like Twitter, where new content is published every second. At least 350,000 Tweets are posted every minute. Twitter manages popular and relevant content using (hashtag) trends and its real-time search feature.

Modern search engines connect you to websites that are continuously updating, so a real-time search function is a valuable feature. Initially, Google offered this function through its dedicated Realtime search website. However, the website was decommissioned in 2016, and many of its features were repurposed and refined for Google Trends. On the other hand, Bing uses real-time search functionality for its vertical searches (news and tiles) and its keyword research tool to help identify trends.

Recently, there has been some discourse and debate on what real-time search is. While some believe that defining real-time search as a feature that finds content (real-time content) as it is published is sufficient, others believe that true real-time search finds content as it's being created, written, or updated.

Real-time or live search retrieves the latest relevant content. While this is possible with shared code repositories and SaaS products that allow multiple parties to collaborate on content creation (such as Google Docs), it's harder to achieve at large scales. Nevertheless, this distinction is ultimately inconsequential.

Real-Time Search vs. Autosuggest​

Many users confuse autocomplete or autosuggest with real-time search. While they're completely different, these two features can share a relationship. Autocomplete refers to a text box (typically a search box) that completes search phrases for you using an internal index. Autosuggest uses past searches and search trends to retrieve suggestions to help you complete your search phrase or select related content.

If autosuggest or autocomplete uses trends to populate its suggestions, it may use the real-time search functionality in the background. Likewise, autocomplete and autosuggest are often used in real-time search boxes.

Besides search engines and social media platforms, there are many scenarios where real-time search can be helpful, and the following are just a few examples:

In E-commerce​

Amazon is the king of e-commerce and will likely remain so for years. It operates in thirteen countries and has over nine million sellers worldwide. Amazon has an estimated twelve million products in its inventory. Amazon Marketplace has at least 350 million products on sale. Its inventory and stock are constantly changing.

With product and stock counts constantly in flux, live-search functionality is a strong advantage for e-commerce websites like Amazon.com. It could bring back results based on the latest promotions, the amount of stock left, written reviews, etc.

Amazon uses real-time search for its Amazon Live feature. Your e-commerce website could benefit from real-time search in the same way. It can be used to return the most popular items based on which ones are purchased, reviewed, or wish-listed the most. Real-time search functionality can be particularly useful for holiday shopping seasons - especially Black Friday and Cyber Monday, where stock availability can change abruptly. Real-time search can ensure that your site visitors get the most relevant results for product queries. This can translate to quicker conversions and boosted sales. Furthermore, it can minimize situations where sold-out stock is erroneously displayed as in stock because of a caching, late update, or UI issue.

Google Maps is famous for being one of the first mapping mobile apps to use real-time data to track traffic and determine the best routes for users to take. Location-based search functions on the same principle and can be used in autocomplete or autosuggestions to facilitate what is known as Geosearch. It can determine trends based on your location and then populate its full-text search suggestions with geographically relevant recommendations.

Uber Technologies, chiefly famous for its revolutionary ride-hailing service, uses real-time features in many of its services. The most obvious example is the Uber app's Request a Ride feature, which displays a map with all the available cars in your area in real time. This real-time data determines which ride is best for your trip.

Recently, we've seen real-time location-based searches used in epidemiology to track and contain the spread of the Covid-19 virus. Mapping apps were created to inform people of virus hotspots and vaccination sites.

Alternatively, users can initiate these searches and access breaking news or relevant information about their surroundings. Location-based searches can provide real-time actionable insights if there is a terrorist threat, fire, demonstration, etc., nearby.

Of course, real-time search can also be used for recreation and leisure activities. Users can find the nearest shopping center or a restaurant and potentially the number of people there, allowing them to determine if there is a wait or not.

Almost all modern GUI applications have some form of search or find functionality. Search is a crucial software feature and should be seen as mandatory. Thus, the conversation isn't about whether you have search functionality; it's about how advanced and responsive it is.

But how should you go about integrating real-time search? Do you even have the infrastructure or budget to build a fully custom real-time search feature from scratch? This is where a service like Tigris comes in.

Real-Time Search with Tigris​

One of the biggest obstacles that software developers face when building highly complex real-time search engines and functions is matching the right tools with their data infrastructure. Often, they'll find themselves trying to create processes that efficiently collect and compile information from a range of disparate data sources. Tigris helps developers forgo this practice and the headaches that come with it.

Tigris is a data platform built for developers. Tigris provides an embedded full-text search engine that gives developers a seamless and scalable experience for building rich search experiences in their applications. Developers can automatically search across all their data using full-text or faceted search. And the embedded search engine eliminates the need to run a separate search system alongside your database.
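To give a flavor of what this looks like in code, here is a hedged sketch using the Tigris TypeScript SDK: a full-text query over a hypothetical products collection in a hypothetical catalog database, with a facets option (an assumption based on SDK examples of this era) to also return per-color match counts for filter UIs.

import { Tigris } from "@tigrisdata/core";
import { TigrisCollectionType } from "@tigrisdata/core/dist/types";
import { SearchRequest } from "@tigrisdata/core/dist/search/types"; // import path assumed

interface Product extends TigrisCollectionType {
  id?: number;
  name: string;
  color: string;
  price: number;
}

async function searchProducts(query: string) {
  // Hypothetical database and collection names for this example
  const db = new Tigris({ serverUrl: process.env.TIGRIS_URI as string }).getDatabase("catalog");
  const products = db.getCollection<Product>("products");

  const request: SearchRequest<Product> = {
    q: query,          // full-text query matched across document fields
    facets: ["color"], // assumed option: count matching documents per color
  };
  const result = await products.search(request);
  for (const hit of result.hits) {
    console.log(hit.document.name, hit.document.price);
  }
}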



· 11 min read
Himank Chaudhary

Tigris is an open source developer data platform that makes building data-rich serverless applications a breeze. It enables developers to stick to just being developers and not be forced into DevOps.

Tigris uses FoundationDB's transactional key-value interface as its underlying storage engine. In our blog post Skipping the boring parts of building a database using FoundationDB we went into the details of why we chose to build on FoundationDB. To recap, FoundationDB is an ordered, transactional, key-value store with native support for multi-key strictly serializable transactions across its entire keyspace. We leverage FoundationDB to handle the hard problems of durability, replication, sharding, transaction isolation, and load balancing so we can focus on higher-level concerns.

We are starting a series of blog posts that go into the details of how Tigris has been implemented. In this first post of the series, we will share the details of how we built the multi-model document layer on top of FoundationDB. We will cover data layout and schema management.

How we architected Tigris

Data layout​

To understand the data layout, the first step is to talk about how data is modeled in Tigris.

Data modeling​

Tigris stores data records as documents. Documents are analogous to JSON objects but Tigris stores them in an optimized binary format. Documents are grouped together in collections. Collections are grouped together in databases.

{
  "field1": 1,
  "field2": "string",
  "field3": { "field1": "value1", "field2": "value2" },
  "field4": [1, 2, 3]
}

You can read more about the data modeling concepts in the Documents section of the docs.
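For example, using the TypeScript SDK conventions that appear elsewhere on this blog, a collection of documents shaped like the one above could be declared along these lines. This is a sketch: an id primary key is added since every collection needs one, and the nested field3 object is left out for brevity.

import {
  TigrisCollectionType,
  TigrisDataTypes,
  TigrisSchema,
} from "@tigrisdata/core/dist/types";

export interface ExampleDoc extends TigrisCollectionType {
  id?: number;
  field1: number;
  field2: string;
  field4: number[];
}

export const ExampleDocSchema: TigrisSchema<ExampleDoc> = {
  id: {
    type: TigrisDataTypes.INT32,
    primary_key: { order: 1, autoGenerate: true },
  },
  field1: { type: TigrisDataTypes.INT64 },
  field2: { type: TigrisDataTypes.STRING },
  // arrays declare their element type; nested objects such as field3
  // are also supported but omitted in this sketch
  field4: { type: TigrisDataTypes.ARRAY, items: { type: TigrisDataTypes.INT64 } },
};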

As Tigris is a multi-tenant system, when a user is created they are assigned to a tenant. All of their data is then stored under this tenant. Thus, the hierarchy of data storage looks something like this

Data storage hierarchy in Tigris

Data layout​

As Tigris leverages FoundationDB as the storage engine, which exposes a key-value interface, the data has to be stored as key-value pairs. This means there needs to be some translation from a logical layout of storing tenants, databases, collections, documents, and schemas to a physical layout.

Tigris maintains different key layouts depending on the information it stores. Each key layout has a custom encoder-decoder and a prefix at the start of the key. The encoder adds this prefix, and the decoder uses it to decode the key according to the appropriate structure. The high-level concept of key encoding remains the same for all types of data (user or system data). The following section discusses how user data (JSON documents) is stored inside Tigris.

Key encoding​

As we have seen in the data modeling section, a collection is identified by a tenant, database, and collection name. This is the minimum information we need in the key to identify a record. However, a collection may have secondary indexes as well. Therefore, an index identifier must also be part of the key.

This key structure is made extensible by having a version component allowing us to add or remove attributes in the future.

To summarize, we need to pack the following information in the key:

tenant | database | collection | index name | index value

Taking a more realistic example, let's say we have a tenant fooApp with a database userdb, a collection users, and an id field defined as the primary key of the documents. This translates to the following key structure

["fooApp", "userdb", "users", "pkey", [1]] =>
{"id": 1, "email": "alex@example.com", "phone_number": 12345}

The index values are seen as an array here because Tigris supports composite indexes as well, meaning a collection can have one or more fields defined as index fields. These index values are packed in a single binary structure.

However, storing this information in every key as-is means unnecessary costs attached to each document. Therefore we implemented key compression.

Key compression​

The key compression algorithm that we have implemented replaces these long strings with integers and ensures that the integer assignments are unique. This is accomplished by persisting these mappings in an internal metadata collection and assigning them to the strings at creation time.

In order to compress a key, the first step is to assign a unique integer to the container (tenant, database, collection) names. This is done during a Data Definition Language (DDL) operation. Whenever Tigris receives a tenant creation, database creation, or collection creation request, it starts a transaction. In this transaction, a unique integer value is reserved for the container name and then assigned to it. This mapping is stored in an internal collection called encoding. The value assigned is incremented so that each mapping is unique. The mapping is also immutable, so it can be freely cached on Tigris servers. In other words, all user metadata such as tenant, database, collection, and index names is uniquely identifiable by its corresponding integer representation. As this is done in a transaction, completing the request means a unique assignment of the integer to the string.
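To make the assignment step concrete, here is an illustrative TypeScript sketch. The names and the transaction interface are invented for the example; this is not Tigris's actual server code.

// Illustrative sketch: transactional assignment of unique integers to
// container names (tenant, database, collection)
interface EncodingTx {
  get(key: string): Promise<number | undefined>;
  put(key: string, value: number): Promise<void>;
}

async function assignId(tx: EncodingTx, containerName: string): Promise<number> {
  // The mapping is immutable: if this name already has an integer,
  // return it, so cached copies on Tigris servers never go stale
  const existing = await tx.get(containerName);
  if (existing !== undefined) {
    return existing;
  }

  // Reserve the next integer from a counter key. Because this runs inside
  // a transaction, two concurrent DDLs cannot receive the same value
  const next = ((await tx.get("__next_id__")) ?? 0) + 1;
  await tx.put("__next_id__", next);
  await tx.put(containerName, next);
  return next;
}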

Using integers has the following benefits:

  • Machine-level instruction can be used to perform integer comparison in one cycle
  • Packed as 4bytes; on the other hand, if the string size grows, it will take up a lot of memory. This results in compact keys, therefore, optimizing CPU, memory, and storage usage
  • Even nominally saving a few bytes per string representation does add up when there are billions of records

The encoding collection that stores this information has the following layout:

encoding | version | tenant-name => integer identifier
encoding | version | tenant-id | database-name => integer identifier
encoding | version | tenant-id | database-id | collection-name => integer identifier

Where

  • encoding β†’ identifier of this metadata collection (key layout)
  • version β†’ version of this key structure so that we can evolve this layout
  • tenant-name, database-name, collection-name β†’ The user-facing names

These integer values are then used to form the key. With this information, here is what the key structure now looks like:

["usr",0x01,0x01,0x02,0x03,0x04,[1]] =>
{"id": 1, "email": "alex@example.com", "phone_number": 12345}

Where

  • usr β†’ the identifier for this key layout
  • 0x01 β†’ version of this key layout
  • 0x01 β†’ tenant-id
  • 0x02 β†’ database-id
  • 0x03 β†’ collection-id

Value encoding​

Internally all user values are stored inside a protobuf message. We don't mutate user payload. Introducing this top-level structure allows us to have metadata of the record attached to it along with raw user content. As an example, this protobuf message, apart from user data, has information such as compression, created time, updated time, the schema version, and other housekeeping fields. Some of this information is then indexed so that we can support time series queries like returning records that are created after Jan 1, 2022.

message ValueWrapper {
  int32 ver = 1;            // schema version
  int32 enc = 2;            // encoding of the raw data
  int32 comp = 3;           // compression of the raw data
  Timestamp created_at = 4; // created timestamp of the document
  Timestamp updated_at = 5; // updated timestamp of the document
  bytes raw = 6;            // raw user payload
}

Our wrapper adds a slight overhead to every value, but provides us with much flexibility. For example, we can switch between different compression algorithms and value encodings on the fly with no downtime or backfills required. In addition, unused fields don't have to be encoded at all.
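For instance, a reader of the wrapper can pick the right decompressor per record, which is what makes on-the-fly algorithm changes safe. The sketch below is invented for illustration (including the choice of algorithms); it is not the server's code.

// Sketch: per-record decoding driven by the wrapper's metadata fields
enum Compression {
  None = 0,
  Zstd = 1, // hypothetical: any registered algorithm could appear here
}

interface DecodedWrapper {
  ver: number;     // schema version the record was written with
  comp: number;    // compression of the raw data
  raw: Uint8Array; // raw user payload
}

function decodeRaw(
  wrapper: DecodedWrapper,
  zstdDecompress: (buf: Uint8Array) => Uint8Array
): Uint8Array {
  // Each record carries its own compression tag, so old and new records
  // can coexist while an algorithm change rolls out
  switch (wrapper.comp) {
    case Compression.None:
      return wrapper.raw;
    case Compression.Zstd:
      return zstdDecompress(wrapper.raw);
    default:
      throw new Error(`unknown compression: ${wrapper.comp}`);
  }
}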

Schemas​

For Tigris, schemas are an integral part and form the basis of rich features such as automatic indexing and real-time search. Tigris enforces that all documents in a collection must conform to a schema. The schema is defined as part of creating the collection. Tigris, then, provides a lightweight way to modify the schema.

This aligns well with application development flow as schemas provide a way to structure the data according to the application logic and flexibility in modifying the schema allows for it to evolve with the application.

Schema storage​

To store schemas, there is an internal metadata collection. All operations on this collection are performed using transactions. This allows us to create a single collection or multiple collections, or to evolve the schema of collections, atomically.

Tigris supports performing DDLs in an interactive transaction. Once this transaction is committed, Tigris guarantees that the schema will be applied atomically and that any new request will see the latest schema. The schema change is applied in an online manner with no downtime, so this operation can be safely performed in production during live traffic. Since a schema change in Tigris is a metadata operation, it can be completed in milliseconds.
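From a client's point of view, evolving a schema is just another call with the updated schema definition. Here is a hedged sketch with the TypeScript SDK, reusing the to-do model from the Next.js post earlier in this feed and adding a hypothetical optional field:

import { DB } from "@tigrisdata/core";
import {
  TigrisCollectionType,
  TigrisDataTypes,
  TigrisSchema,
} from "@tigrisdata/core/dist/types";

interface TodoItemV2 extends TigrisCollectionType {
  id?: number;
  text: string;
  completed: boolean;
  dueDate?: string; // hypothetical new optional field
}

const TodoItemSchemaV2: TigrisSchema<TodoItemV2> = {
  id: {
    type: TigrisDataTypes.INT32,
    primary_key: { order: 1, autoGenerate: true },
  },
  text: { type: TigrisDataTypes.STRING },
  completed: { type: TigrisDataTypes.BOOLEAN },
  dueDate: { type: TigrisDataTypes.DATE_TIME },
};

async function evolveSchema(db: DB) {
  // Applied online as a metadata operation: no downtime, existing documents
  // are untouched, and subsequent writes validate against the new schema
  await db.createOrUpdateCollection<TodoItemV2>("todoItems", TodoItemSchemaV2);
}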

As mentioned before, there is an internal collection that stores the schemas. The key is of the following structure:

schemas | version | tenant-id | database-id | collection-id | revision-id

Where

  • schemas β†’ identifier for this collection
  • version β†’ version of this key structure so that we can evolve this layout
  • tenant-id β†’ tenant integer identifier
  • database-id β†’ database integer identifier
  • collection-id β†’ collection integer identifier
  • revision-id β†’ version of the schema

As an example

["schemas", 0x01, 0x01, 0x02, 0x03, 0x01] =>
{"properties":{"a":{"type":"integer"},"primary_key":["a"]}}

The schemas are cached on the Tigris servers to avoid reloading them from storage, but as we provide transactional schema updates, we need to ensure the cache is always consistent and never returns a stale schema version. The following section talks about how these schemas are propagated atomically and how the cache always returns a consistent view of the schemas.

How are schema changes propagated instantaneously?​

One of the ways of atomically propagating schema change is for all the servers to block the transactions until all other servers have received the new schema. However, in a production environment, this behavior is highly undesirable.

Tigris has chosen a different route to avoid this coordination but still support atomic schema changes. As Tigris transactionally stores the schema, in the same transaction, it bumps up a metadata version using FoundationDB's versionstamp.

Now any request that arrives must be checked against this version.

How schema propagation happens in Tigris

Checking the metadata version at the start of every transaction is an expensive operation, so Tigris has optimized this flow by attaching a FoundationDB future to it.

FoundationDB provides asynchronous APIs that, rather than blocking the calling thread until the result is available, immediately return a future object. Using futures helps us avoid penalizing the request and lets it proceed as usual.

Once the request is processed and is ready to be committed, this future has resolved. Before committing the transaction, we check if the metadata version has changed. If the version has been bumped for a tenant, a schema change operation has been performed, so the server reloads the latest version of the schema for this tenant; otherwise the request proceeds as-is.

This guarantees atomicity of schema propagation across the Tigris servers. This is also how Tigris provides the guarantee that once success is returned for a schema change operation, the new schema is always used for subsequent requests.
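In TypeScript-flavored pseudocode, the flow looks roughly like this. All the names here are hypothetical; this sketches the flow described above, not the actual server internals.

// Sketch of the commit-time metadata version check
interface Tx {
  metadataVersionFuture(): Promise<number>; // resolves in the background
  commit(): Promise<void>;
}

interface TenantCache {
  cachedVersion: number;
  reloadSchemas(): Promise<void>;
}

async function runRequest(tx: Tx, tenant: TenantCache, work: () => Promise<void>) {
  // Issued up front: a future that resolves while the request is being
  // processed, so nothing blocks waiting for the version read
  const versionFuture = tx.metadataVersionFuture();

  // Process the request as usual with the currently cached schemas
  await work();

  // By commit time the future has typically resolved, so this await is cheap
  const version = await versionFuture;
  if (version !== tenant.cachedVersion) {
    // A schema change happened somewhere: reload this tenant's schemas
    // before committing so a stale schema is never used after success
    await tenant.reloadSchemas();
    tenant.cachedVersion = version;
  }
  await tx.commit();
}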

Schema enforcement​

As mentioned above, Tigris enforces that all documents in a collection must conform to a schema. The schema defines all the fields that make up a document in the collection and is required as part of creating the collection.

Schema validation happens during all write requests. Validating during writes means there is no penalty on read operations, as the stored data already conforms to the schema. Tigris allows evolving the schema, but with some restrictions at this time.
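As a quick illustration, here is a hedged sketch reusing the to-do collection from the Next.js post earlier in this feed. In the TypeScript SDK most mismatches are already caught at compile time, so the cast is needed to sneak a non-conforming document through; the rejection on the write path is the point.

import { DB } from "@tigrisdata/core";
// TodoItem as defined in the to-do post's models (path hypothetical here)
import { TodoItem } from "./models/tigris/todoStarterApp/todoItems";

async function insertMalformed(tigrisDb: DB) {
  const itemsCollection = tigrisDb.getCollection<TodoItem>("todoItems");

  // A document that does not conform to the schema (text must be a string)
  const malformed = { text: 42, completed: "yes" } as unknown as TodoItem;

  try {
    await itemsCollection.insertOne(malformed);
  } catch (err) {
    // Rejected on the write path; reads never pay a validation penalty
    // because stored data already conforms to the schema
    console.error("insert rejected:", (err as Error).message);
  }
}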

Read the schema sections of the documentation to learn more about how schemas work in Tigris.

This is the first blog post in a multi-part series where we will be sharing the details of how we have implemented some of the core features in Tigris. Be on the lookout for the next part!



Tigris at GitHub

Check out the quickstart, give us a star on GitHub, and join us on Slack.