Pinecone-Serverless

Pinecone handles high-dimensional vector data at scale, integrates easily, and returns query results quickly, making it a reliable, fast option for large-scale search.

  • Pinecone Serverless is a managed offering that lets users deploy and run Pinecone's vector database in a serverless manner.
  • Users don't need to manage infrastructure, such as servers or scaling, themselves. Instead, they can focus on developing their applications and let Pinecone Serverless handle the underlying infrastructure.
  • As a result, developers can integrate the power of Pinecone's vector database into their applications without the operational overhead of managing infrastructure.

5 reasons to build with Pinecone serverless

1. Lower your costs by up to 50x

  • Memory-efficient retrieval
  • Intelligent query planning
  • Separation of storage and compute

2. Forget about configuring or managing your index

Pinecone serverless simplifies startup and scaling. As a fully serverless architecture,
you're relieved of database management and scaling concerns. No pods, replicas, or
resource sharding to configure. Just name your index, load data, and start querying via
API or client.
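For example, once an index exists, loading data and querying it takes only a few client calls. This is a minimal sketch with the Python client; the index name, vector values, and dimensions are illustrative and must match your own index:

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("serverless-index")  # the name chosen at creation time

# Load data: upsert a few vectors (values truncated for illustration).
index.upsert(vectors=[
    {"id": "doc-1", "values": [0.1, 0.2, 0.3]},
    {"id": "doc-2", "values": [0.2, 0.1, 0.4]},
])

# Query: fetch the 3 nearest neighbors of a query vector.
results = index.query(vector=[0.1, 0.2, 0.25], top_k=3)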

3. Make your applications more knowledgeable

Increasing the amount of data or knowledge in your vector database leads to more
relevant outcomes. Our research on the impact of Retrieval Augmented Generation
(RAG) confirms that expanding the scope of searchable data enhances fidelity, ensuring
results are more factually accurate. Even with a dataset reaching billions in scale,
incorporating all available data consistently improves performance, regardless of the
chosen LLM (Large Language Model).

Developers aiming to construct highly informed GenAI applications require a robust
vector database capable of navigating vast and continuously expanding pools of vector
data. Pinecone serverless offers precisely this capability. With the serverless
infrastructure, companies can seamlessly integrate practically limitless knowledge into
their applications.

Pinecone serverless is equipped with support for namespaces, real-time index updates,
metadata filtering, and hybrid search functionalities. This ensures that users obtain the
most relevant results, irrespective of the nature or magnitude of their workload. Explore
further to understand how our innovative new architecture sustains optimal
performance at scale.
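As an illustration, a single query can combine a namespace with a metadata filter. This is a minimal sketch; the namespace, field names, and values are hypothetical:

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("serverless-index")

# Restrict the search to one namespace and to records whose metadata
# matches the filter, then return the top 3 matches with their metadata.
results = index.query(
    vector=[0.1, 0.2, 0.25],              # query embedding (illustrative)
    top_k=3,
    namespace="product-docs",             # hypothetical namespace
    filter={"category": {"$eq": "faq"}},  # metadata filter
    include_metadata=True,
)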

4. Connect to your favorite tools

Pinecone has partnered with best-in-class GenAI solutions to make the serverless
experience as easy to use as possible. See how these partners — Anyscale, Cohere,
Confluent, Langchain, Pulumi, and Vercel — can help you or your engineering team get
started on serverless.


5. Build like the world’s leading companies

Pinecone has emerged as the top pick among developers crafting GenAI applications.
Through our integration of serverless technology, we've streamlined usability and
scalability even further. Serverless functionality has paved the way for companies to
develop significantly enhanced GenAI applications, a fact underscored by endorsements
from industry leaders such as Notion, Gong, and DISCO.


What does the future hold for vector databases?

Pinecone serverless was designed to solve some of the most challenging problems with vector databases, such as freshness, elasticity, and cost at scale. We have made significant headway into meeting these challenges, but at the same time, we are only scratching the surface of research and development in vector databases.

Vector databases are fundamentally different from traditional databases, and at the same time, they inherit some of the challenges of traditional databases.

Traditional databases deal with structured data and are good at that. Such databases have by now perfected the techniques of organizing data so that you can access fresh and accurate results at scale. This capability is possible only because the data and queries are highly structured and restrictive.

Vector databases are inherently different: vector embeddings don’t have the structure that traditional database columns do. Likewise, the problem of searching for the nearest neighbors of a query in your vector database is fundamentally computationally intensive in anything but the smallest databases. It is a mistake to think of a vector database as just another data type with just another index in an SQL/NoSQL database. For precisely this reason, every such attempt to naively integrate vectors into traditional databases is doomed to fail at scale.

Every vector database returns an approximate list of candidates for such a search. The real difference between vector databases is the cost-to-search-quality tradeoff each one provides.

In the pod-based architecture, we took the approach that we always want to provide the highest possible search quality, even if that means costs can sometimes be higher. Everyone else has taken the approach that the user must figure out how to tune vector databases to reach a high enough recall. In fact, not a single vector database benchmark out there even reports recall and the cost of achieving a given recall.

We think it is time the situation changes: after all, a vector database’s primary purpose is to provide high-quality search results in a cost-effective manner. If we don’t have a simple and intuitive way to present this tradeoff to users without expecting them to become performance engineers on vector search, then we fail as a cloud-native database. At Pinecone, we measure and obsess over providing the best user experience and the best search quality economically at scale, which motivates the architectures we choose and the design tradeoffs we make.
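To make the recall side of that tradeoff concrete, recall@k compares an approximate index's top-k results against exact brute-force neighbors. A minimal sketch in Python, assuming both result sets are lists of ID lists (one list per query):

def recall_at_k(exact_results, approx_results, k=10):
    # Fraction of true top-k neighbors that the approximate index
    # returned, averaged over all queries.
    hits = 0
    for exact, approx in zip(exact_results, approx_results):
        hits += len(set(exact[:k]) & set(approx[:k]))
    return hits / (k * len(exact_results))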

With Pinecone serverless, we have taken a big step toward the future of vector databases. But we expect to continue delivering rapidly on that vision, with further improvements to scale, elasticity, and the search-quality-to-cost tradeoff.


Setting Up Pinecone Serverless: A Step-by-Step Guide

Are you looking to leverage the power of Pinecone Serverless for your machine learning and AI applications but unsure where to start? In this guide, we'll walk you through the process of setting up Pinecone Serverless, from signing up to deploying your application.

1. Sign Up for Pinecone

First things first, head over to the Pinecone website and sign up for an account if you
haven't already. The signup process is straightforward and requires providing some basic
information along with agreeing to the terms of service.

2. Create an Index

Follow Pinecone's instructions for setting up your serverless index.


  • Create a serverless index -
    To create a serverless index, import the ServerlessSpec class and use the spec parameter to define the cloud and region where the index should be deployed:
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
  name="serverless-index",
  dimension=1536,
  metric="cosine",
  spec=ServerlessSpec(
    cloud="aws",
    region="us-east-1"
  )
)
  • Create a pod-based index -

To create a pod-based index, import the PodSpec class and use the spec parameter to
define the environment where the index should be deployed, the pod type and size to
use, and other index characteristics:

from pinecone import Pinecone, PodSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
  name="pod-index",
  dimension=1536,
  metric="cosine",
  spec=PodSpec(
    environment="us-west1-gcp",
    pod_type="p1.x1",
    pods=1
  )
)

3. Install the Pinecone Client Library
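Assuming a Python environment, the packages used below can be installed with pip (package names reflect the current LangChain split into langchain-openai and langchain-community; versions are not pinned):

pip install pinecone-client langchain langchain-openai langchain-community

Then import the modules used in the remaining steps: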

import os

from langchain.chains.question_answering import load_qa_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Pinecone
from langchain_openai import OpenAI, OpenAIEmbeddings
from pinecone import Pinecone as PineconeClient

4. API Keys

Ensure these are set:

  • PINECONE_API_KEY
  • PINECONE_ENVIRONMENT
  • PINECONE_INDEX_NAME
  • OPENAI_API_KEY
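A quick way to verify that the variables are set before running the code (a minimal sketch):

import os

for var in ("PINECONE_API_KEY", "PINECONE_ENVIRONMENT",
            "PINECONE_INDEX_NAME", "OPENAI_API_KEY"):
    assert os.environ.get(var), f"Missing environment variable: {var}"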

5. Initialization

Initialize the Pinecone client; for a serverless index, only the API key is needed:

  pinecone = PineconeClient(api_key=os.environ["PINECONE_API_KEY"])

Initialize the embeddings model:

  embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
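The indexing step below assumes a docs_chunks list of text chunks. A minimal sketch of producing it with the splitter imported in step 3 (raw_text and the chunk sizes are illustrative):

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk
    chunk_overlap=100,  # overlap preserves context across chunk boundaries
)
docs_chunks = text_splitter.split_text(raw_text)  # raw_text holds your source text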

Embed the docs_chunks and index them in Pinecone (Pinecone here is the LangChain vector store imported above):

index_name = os.environ["PINECONE_INDEX_NAME"]
pinecone_index = Pinecone.from_texts(docs_chunks, embeddings,
      index_name=index_name)

Load the existing index as a vector store for retrieval:

vectorstore = Pinecone.from_existing_index(
        index_name=index_name, embedding=embeddings)

Create the LLM question-answering chain:

llm = OpenAI(temperature=0, openai_api_key=os.environ["OPENAI_API_KEY"])
chain = load_qa_chain(llm, chain_type="stuff")

Define the query:

query = "what is bot?"

Run a vector similarity search to retrieve the top matches:

docs = vectorstore.similarity_search(query, k=3)

Answer the question using the retrieved documents:

output = chain.run(input_documents=docs, question=query)
print(output)


Delete vectors or the index:

  index = pinecone.Index(index_name)
  index.delete(delete_all=True)      # removes all vectors but keeps the index

  pinecone.delete_index(index_name)  # deletes the index itself
