Ent API: upsert*()

Among all Ent API calls, upsert*() is probably the most limited one. This is because it is optimized for batching, and SQL itself limits what can be achieved in this area. (Some of these restrictions may be lifted later.)

Ent.upsert(vc, { field: "...", ... }): string

This call tries to insert a new row into the database. If that causes a "duplicate key" error (i.e. a row with the unique key defined in the Ent schema already exists), it updates that existing row instead.

Returns the ID of the updated (or inserted) row.
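
Here is a minimal sketch of a single upsert() call, assuming the same EntTopic class (with a unique key on slug) used in the batching example below:

// If a topic with slug "s1" already exists, its creator_id and subject are
// updated and the ID of that existing row is returned. Otherwise, a new row
// is inserted, and its freshly generated ID is returned.
const id: string = await EntTopic.upsert(vc, {
  slug: "s1",
  creator_id: "123",
  subject: "test1",
});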

Don't use upsert() too often, because it advances the ID sequence of the Ent even when the object is updated, not inserted. E.g. if you upsert the exact same row with the exact same field values a billion times, the DB sequence backing the table will be incremented a billion times, even though the row remains unchanged. There is currently no good way to solve this, although one may appear in the future.

Upsert can't work if triggers are defined for the Ent, because we don't know the Ent ID in advance (it depends on whether the upsert inserts a new row or falls back to updating an existing one on a duplicate).

Also, upsert() will refuse to run if there are Inverses defined on some Ent fields (for the same reason: Inverse operations run in a different microshard strictly before the main Ent operation, and they must know the row's ID in advance).

Currently, upsert() ignores autoInsert fields when it updates an existing row (such fields are only populated on the very first insert). Fields with autoInsert are not updated, even if their values are explicitly provided in the input row. Part of the motivation is that we don't want e.g. created_at (which is an autoInsert field) to be modified on every upsert() call; part of it is just a limitation of the current implementation.
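
For context, here is a rough sketch of what such a schema definition might look like; the exact field options and PgSchema shape here are assumptions inferred from the columns in the batched query below, not taken from this page (see the Create Ent Classes chapter for the real definition):

const schema = new PgSchema(
  "topics",
  {
    // autoInsert fields: populated on the very first insert only;
    // upsert() does not touch them when it updates an existing row.
    id: { type: ID, autoInsert: "id_gen()" },
    created_at: { type: Date, autoInsert: "now()" },
    // autoUpdate fields, in contrast, are refreshed on every write.
    updated_at: { type: Date, autoUpdate: "now()" },
    slug: { type: String },
    creator_id: { type: ID },
    subject: { type: String },
  },
  ["slug"] // the unique key upsert() relies on for conflict resolution
);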

All of the above issues may be solved in the future.

Ent.upsertReturning(vc, { ... }): Ent

This call is very similar to upsert(), but in the end, it loads the updated (or inserted) Ent back from the database using loadX().

Since upsert() is meant to always succeed (except when there is a transport error, or when some database constraint check unrelated to the main Ent's unique key fails), there are no "X" and "Nullable" variations of this method.
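
A minimal sketch, assuming the same EntTopic class as in the batching example below:

// Behaves like upsert(), but returns the resulting Ent (loaded back via
// loadX() under the hood), so its fields are immediately available.
const topic = await EntTopic.upsertReturning(vc, {
  slug: "s1",
  creator_id: "123",
  subject: "test1",
});
console.log(topic.id, topic.subject);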

Batching

Multiple upsert*() calls running in parallel are batched by Ent Framework:

await Promise.all([
  EntTopic.upsert(vc, { 
    slug: "s1",
    creator_id: "123",
    subject: "test1",
  }),
  EntTopic.upsert(vc, {
    created_at: new Date("2020-01-01"),
    slug: "s2",
    creator_id: "456",
    subject: "test2",
  }),
]);

Notice that in the above example, we explicitly passed the value "2020-01-01" to the autoInsert field created_at to illustrate the problem with updating it.

Here is the batched query:

INSERT INTO topics(id, created_at, updated_at, slug, creator_id, subject) VALUES
  (id_gen(), now(), now(), 's1', '123', 'test1'),
  (id_gen(), '2020-01-01', now(), 's2', '456', 'test2')
ON CONFLICT (slug) DO UPDATE SET
  updated_at=EXCLUDED.updated_at,
  slug=EXCLUDED.slug,
  creator_id=EXCLUDED.creator_id,
  subject=EXCLUDED.subject
RETURNING id

We see that in this batched query, the first VALUES row has created_at equal to now() (evaluated by PostgreSQL on each run), while the second row has '2020-01-01' there. This explains why there is no created_at=EXCLUDED.created_at part in the ON CONFLICT DO UPDATE clause: by the time the query runs, we don't know whether a row processed by ON CONFLICT corresponds to case 1 in VALUES (where created_at should not be touched) or to case 2 (where it should be updated).
