Import records
This page shows you how to import records from Amazon S3 or Google Cloud Storage into an index. Importing from object storage is the most efficient and cost-effective way to load large numbers of records into an index.
To run through this guide in your browser, see the Bulk import colab notebook.
This feature is in public preview and available only on Standard and Enterprise plans.
Before you import
Before you can import records, ensure you have a serverless index, a storage integration, and data formatted in a Parquet file and uploaded to an Amazon S3 or Google Cloud Storage bucket.
Create an index
Create a serverless index for your data.
- Import does not support integrated embedding, so make sure your index is not associated with an integrated embedding model.
- Make sure your index is on the same cloud as your object storage.
- You cannot import records into existing namespaces, so make sure your index does not have namespaces with the same name as the namespaces you want to import into.
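For reference, here is a minimal sketch of creating a dense serverless index with the Python SDK; the index name, dimension, metric, cloud, and region are placeholders, so adjust them to match your embeddings and your bucket's cloud:

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder API key

# Create a serverless index on the same cloud as your object storage.
pc.create_index(
    name="example-index",       # placeholder index name
    dimension=1536,             # must match your embedding dimension
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```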
Add a storage integration
To import records from a secure data source, you must create an integration to allow Pinecone access to data in your object storage. See the following guides:
To import records from a public data source, a storage integration is not required.
Prepare your data
For each namespace you want to import into, create a Parquet file and upload it to object storage.
Dense index
To import into a dense index, the Parquet file must contain the following columns:
| Column name | Parquet type | Description |
| --- | --- | --- |
| `id` | `STRING` | Required. The unique identifier for each record. |
| `values` | `LIST<FLOAT>` | Required. A list of floating-point values that make up the dense vector embedding. |
| `metadata` | `STRING` | Optional. Additional metadata for each record. To omit from specific rows, use `NULL`. |
The Parquet file cannot contain additional columns.
For example:
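One way to produce a conforming file is with pandas and pyarrow. The following is a minimal sketch; the IDs, vector values, and metadata are illustrative, and metadata is assumed to be a JSON-encoded string:

```python
import pandas as pd

# Illustrative dense-index records: id, values, and optional metadata.
df = pd.DataFrame({
    "id": ["1", "2"],
    "values": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
    "metadata": ['{"genre": "comedy"}', None],  # None is written as NULL
})

# Write with the pyarrow engine, then upload to your namespace directory.
df.to_parquet("0.parquet", engine="pyarrow", index=False)
```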
Sparse index
To import into a sparse index, the Parquet file must contain the following columns:
| Column name | Parquet type | Description |
| --- | --- | --- |
| `id` | `STRING` | Required. The unique identifier for each record. |
| `sparse_values` | `LIST<INT>` and `LIST<FLOAT>` | Required. A list of floating-point values (sparse values) and a list of integer values (sparse indices) that make up the sparse vector embedding. |
| `metadata` | `STRING` | Optional. Additional metadata for each record. To omit from specific rows, use `NULL`. |
The Parquet file cannot contain additional columns.
For example:
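A similar sketch for a sparse index, assuming `sparse_values` is stored as a single column holding parallel `indices` (integers) and `values` (floats) lists; the IDs, indices, values, and metadata are illustrative, so verify the exact column layout against your data before importing:

```python
import pandas as pd

# Illustrative sparse-index records. Each sparse_values entry pairs integer
# indices with their corresponding float values (assumed layout).
df = pd.DataFrame({
    "id": ["1", "2"],
    "sparse_values": [
        {"indices": [10, 45, 123], "values": [0.5, 0.3, 0.9]},
        {"indices": [2, 71], "values": [0.4, 0.6]},
    ],
    "metadata": ['{"genre": "drama"}', None],  # None is written as NULL
})

df.to_parquet("0.parquet", engine="pyarrow", index=False)
```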
In object storage, the directory structure determines the namespaces that will be created and the data that will be imported into them. The path must begin with the bucket, followed by an import folder, followed by a sub-directory for each namespace you want to import.
For example, if you want to import data into the namespaces `example_namespace1` and `example_namespace2`, your directory structure must look like this:
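(The bucket, directory, and file names below are illustrative.)

```
s3://BUCKET_NAME/
└── IMPORT_DIR/
    ├── example_namespace1/
    │   ├── 0.parquet
    │   └── 1.parquet
    └── example_namespace2/
        ├── 0.parquet
        └── 1.parquet
```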
Each import request can import up to 1TB of data, or 100,000,000 records, into a maximum of 100 namespaces, whichever limit is met first.
Import records into an index
Review current limitations before starting an import.
Use the `start_import` operation to start an asynchronous import of vectors from object storage into an index.

- For `uri`, specify the URI of the bucket and import directory containing the namespaces and Parquet files you want to import, for example, `s3://BUCKET_NAME/IMPORT_DIR` for Amazon S3 or `gs://BUCKET_NAME/IMPORT_DIR` for Google Cloud Storage.
- For `integration_id`, set the Integration ID of the Amazon S3 or Google Cloud Storage integration you created. The ID is found on the Storage integrations page of the Pinecone console. An Integration ID is not needed to import from a public bucket.
- For `error_mode`, use `CONTINUE` or `ABORT`.
  - With `ABORT`, the operation will stop if any records fail to import.
  - With `CONTINUE`, the operation will continue on error and complete, but there will not be any notification about which records, if any, failed to import. To see how many records were successfully imported, use the `describe_import` operation.
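For example, a minimal sketch with the Python SDK; the API key, index host, bucket path, and integration ID are placeholders, and method details can vary slightly by SDK version:

```python
from pinecone import Pinecone, ImportErrorMode

pc = Pinecone(api_key="YOUR_API_KEY")   # placeholder API key
index = pc.Index(host="INDEX_HOST")     # placeholder index host

# Start an asynchronous import from the bucket's import directory.
response = index.start_import(
    uri="s3://BUCKET_NAME/IMPORT_DIR",     # placeholder bucket and directory
    integration_id="YOUR_INTEGRATION_ID",  # omit for a public bucket
    error_mode=ImportErrorMode.CONTINUE,   # or ImportErrorMode.ABORT
)
print(response)  # includes the operation's id for status checks
```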
Each import request can import up to 1TB of data, or 100,000,000 records, into a maximum of 100 namespaces, whichever limit is met first.
The response contains an `operation_id` that you can use to check the status of the import.
Once all the data is loaded, the index builder will index the records, which usually takes at least 10 minutes. During this indexing process, the expected job status is `InProgress`, but `100.0` percent complete. Once all the imported records are indexed and fully available for querying, the import operation will be set to `Completed`.
You can start a new import using the Pinecone console. Find the index you want to import into, and click the ellipsis (..) menu > Import data.
Manage imports
List imports
Use the `list_imports` operation to list all of the recent and ongoing imports. By default, the operation returns up to 100 imports per page. If the `limit` parameter is passed, the operation returns up to that number of imports per page instead. For example, if `limit=3`, up to 3 imports are returned per page. Whenever there are additional imports to return, the response includes a `pagination_token` for fetching the next page of imports.
When using the Python SDK, `list_imports` paginates automatically.
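For example, a minimal sketch with the Python SDK (placeholder API key and index host):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")   # placeholder API key
index = pc.Index(host="INDEX_HOST")     # placeholder index host

# Iterates over all recent and ongoing imports, fetching pages as needed.
for imported in index.list_imports():
    print(imported)
```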
You can view the list of imports for an index in the Pinecone console. Select the index and navigate to the Imports tab.
When using the Node.js SDK, Go SDK, .NET SDK, or REST API to list recent and ongoing imports, you must manually fetch each page of results. To view the next page of results, include the `paginationToken` provided in the response of the `list_imports` / `GET` request.
Describe an import
Use the `describe_import` operation to get details about a specific import.
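For example, a minimal sketch with the Python SDK; the API key, index host, and import ID are placeholders:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")   # placeholder API key
index = pc.Index(host="INDEX_HOST")     # placeholder index host

# Returns details such as the import's status and percent complete.
import_details = index.describe_import(id="101")  # placeholder import ID
print(import_details)
```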
You can view the details of your import using the Pinecone console.
Cancel an import
The `cancel_import` operation cancels an import if it is not yet finished. It has no effect if the import is already complete.
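For example, a minimal sketch with the Python SDK; the API key, index host, and import ID are placeholders:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")   # placeholder API key
index = pc.Index(host="INDEX_HOST")     # placeholder index host

# Cancels the import if it is still running; no effect if already complete.
index.cancel_import(id="101")           # placeholder import ID
```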
You can cancel your import using the Pinecone console. To cancel an ongoing import, select the index you are importing into and navigate to the Imports tab. Then, click the ellipsis (..) menu > Cancel.
Limitations
- Import does not support integrated embedding.
- Import only supports Amazon S3 or Google Cloud Storage as a data source.
- You cannot import data from S3 Express One Zone storage.
- You cannot import data into existing namespaces.
- Each import request can import up to 1TB of data into a maximum of 100 namespaces. Note that you cannot import more than 10GB per file or more than 100,000 files per import.
- Each import will take at least 10 minutes to complete.