Here is a valid CQL expression:
```
user's age is greater than 30 and user's interests include "sports"
```

You can read it out loud and it makes sense. Your product manager can read it. Your newest team member can read it. An LLM can understand and write it with no fine-tuning.
CQL stands for Contextual Query Language. (Internally, some of us insist it stands for Croct Query Language. This debate has not been resolved and may never be.) It was designed with an unusual constraint: it has to be writable by people who have never seen a line of code. But it was also designed for something more ambitious than a configuration format. Think of it as a real-time question-answering API for the user's world. You do not pre-compute results into a database and look them up later. You ask questions, live, and get answers in real time.
Is the user on a mobile device? Is it raining where they are? Have they been to this page before? Did they abandon their cart last week? You ask, CQL answers. Each query is a question about the present moment, evaluated against the full context of who the user is, where they are, what they are doing, and what has happened before. No batch jobs. No ETL pipelines. No stale caches. Just a question and an answer, right now.
That "right now" hides an enormous amount of machinery. Think about what it takes to answer user has shown interest in "smartphones" in real time. Somewhere, events are being collected from browsers and servers. Those events need to be attributed to the right user, which means resolving identity across anonymous sessions, logged-in accounts, and device switches. The events need to be sessionized in real time, stitched into a coherent timeline as they arrive out of order from different sources. Profile attributes, cart state, behavioral history, location data: all of it needs to be materialized into a consistent view of the user, one that is fresh enough to be useful but resilient enough to handle the inevitable delays and failures of distributed systems. And the answer needs to come back in under 80 milliseconds, because anything slower and the page has already rendered with the wrong content.
Behind the scenes, there are layers of fallback for eventual consistency. A user opens your site from a coffee shop in Tokyo. The browser reports a GPS coordinate. That coordinate gets resolved to a city, a country, a timezone, a weather condition. By the time a CQL query asks whether the user is browsing from a beach town or a financial district, the system needs that location resolved and ready, or it needs to gracefully handle the fact that it is not available yet. All of that, the stream processing, the identity resolution, the real-time aggregation, the fallback strategies to keep things consistent when the world is not, just so someone can write a five-word query and get an answer.
And CQL abstracts all of it away. The person writing user has shown interest in "smartphones" does not know about event pipelines. They do not know about identity graphs or session boundaries or consistency models. They asked a question. They got an answer. The complexity is real, but it is not theirs to carry.
Now consider what happens when a single query reaches across both sides of that divide. A query that mixes what is happening right now in the browser with what happened weeks ago on the server:
```
page's path is like "catalog" and user has not ordered a product with name "Nike Air Max"
```

Read that out loud. It is almost conversational. But think about what it is actually asking. "Page's path is like 'catalog'" is client-side, immediate. It reads the current URL from the browser tab the user is looking at right now. "Has not ordered a product with name 'Nike Air Max'" is historical. It reaches into the user's entire purchase history, across every session, every device, potentially spanning months or years of transactions stored on the server.
One query. Two completely different data sources. Two completely different latency profiles. The page path is local, sub-millisecond, available in the browser without a network call. The purchase history requires a round-trip to a backend, scanning millions of order events that could span months or years. And CQL treats them as if they were the same thing, because to the person writing the query, they are the same thing. They are just facts about the user's world.
The evaluation engine has to orchestrate across client and server contexts, merge the results, and return a single boolean. The person writing the query does not choose where the data lives. They do not configure a data source. They just ask. The infrastructure figures out the rest.
Simple syntax, serious language
CQL is not just plain English with a compiler strapped to it. It is a proper expression language with a formal grammar, a concrete syntax tree, variables, functions, quantifiers, temporal arithmetic, and a selector system for navigating nested data. You can write the same expression in a more compact form:
```
user.age > 30 and user.interests include "sports"
```

Both forms parse to the same tree. The natural language syntax and the symbolic syntax are interchangeable. This is deliberate. CQL sits at the intersection of two worlds: marketers who think in words and developers who think in operators.
Where does CQL actually run? Everywhere. In audience targeting rules that decide which users see a campaign. In content slots that personalize pages in real time. In React components:
```typescript
const isDeveloper = useEvaluation<boolean>("user's persona is 'developer'");
```

And on the server:
```typescript
const eligible = await evaluate("user's plan is 'premium' and user's cart's total > 50");
```

CQL does not operate on static tables. It is not SQL. There are no JOINs, no SELECT statements, no transactions. CQL operates on live, dynamic user context: profile attributes, behavioral history, cart contents, session state, temporal data like the current time and day of the week. Every evaluation is a snapshot of a moving target.
And beneath the friendly syntax, there is real complexity. CQL has quantifiers:
```
some item in user's cart's items satisfies item's price > 100
```

It has temporal arithmetic that understands calendars:
```
today plus 3 weekdays
```

That expression knows that Friday plus three weekdays is Wednesday, not Monday. It skips weekends. The language understands what a weekday is.
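As a sketch of that semantics (our illustration, not the CQL engine's actual implementation), weekday addition amounts to stepping forward one day at a time and only counting days that land on Monday through Friday:

```typescript
// Sketch: add N weekdays to a date, skipping Saturdays and Sundays.
// Illustrative only; the real CQL engine's implementation may differ.
function plusWeekdays(start: Date, weekdays: number): Date {
  const result = new Date(start);
  let remaining = weekdays;

  while (remaining > 0) {
    result.setDate(result.getDate() + 1);
    const day = result.getDay(); // 0 = Sunday, 6 = Saturday

    if (day !== 0 && day !== 6) {
      remaining--;
    }
  }

  return result;
}

// Friday, January 5, 2024, plus 3 weekdays lands on Wednesday, January 10:
// Saturday and Sunday are skipped, then Monday, Tuesday, Wednesday count.
const friday = new Date(2024, 0, 5);
console.log(plusWeekdays(friday, 3).toDateString());
```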
It has functions, selectors that navigate nested data structures, and a full concrete syntax tree with over 70 node types. The parser handles possessive forms (user's cart), prepositional forms (count of items), and infix operators (a > b) all in the same grammar. Both user's age is not less than 18 and user.age >= 18 parse, and they parse to the same tree.
The language is readable. That was the goal. And for a while, readability felt like enough.
What if the biggest challenge was not writing a query, but truly understanding what it means before it runs?
CQL is readable. But readability is for humans. What about machines?
What does it mean to understand a query?
The CQL engine evaluates millions of queries in real time. Every query is a promise. An audience rule promises: "I will return a boolean." A slot expression promises: "I will return the right content for this user." A condition promises: "I will be true or false, and I will be correct."
But what if the types do not match? What if a query promises a boolean but actually produces a string? What if a condition combines two constraints that can never both be true? What if someone writes user's age > 5 and user's age < 3 and nobody notices?
These are not hypothetical concerns. In personalization, errors are invisible. There is no stack trace. There is no crash report. The user simply sees the wrong content. The A/B test silently produces garbage data. The targeting rule matches nobody or everybody, and the only signal is a metric that looks slightly off three weeks later.
So we started asking questions. Can we know the return type of a query before it runs? Can we detect that user's age > 5 and user's age < 3 is a contradiction, that no user will ever satisfy both conditions simultaneously? Can we resolve count of [1, 2, 3] to the number 3 without running anything? Can we know which pieces of user state a query depends on, so we know exactly when to re-evaluate it and when to leave it alone?
These are not academic questions. They have direct business consequences. A contradiction in an audience rule means a segment with zero members. A type mismatch in a slot expression means the wrong content shown to every visitor. A query that depends on the cart but gets re-evaluated on every page view wastes compute and adds latency. And none of these problems announce themselves. They hide in the gap between what the query says and what it does.
Every query is a promise. But who checks the promises?
We needed something that could reason about queries the way a human would, tracing types through operations, detecting contradictions, understanding dependencies. But faster, and without mistakes.
Introducing the CQL static analyzer
Before we get into the details, a brief detour into type theory.
Every value has a type. The number 5 is an integer. The text "hello" is a string. The value true is a boolean. Types are categories. They tell you what operations make sense: you can add integers, concatenate strings, negate booleans. You cannot add a string to a boolean. That is a type error.
A type system is a set of rules for assigning types to expressions and checking that the operations are valid. Most type systems work at the level of categories: this is an integer, that is a string, done. But types can be more precise. Instead of just "integer," you can say "integer between 0 and 100." Instead of just "string," you can say "a non-empty string." The more precise the types, the more the system can reason about what your expression actually does.
If you remember sets from school, you already have the intuition. Types are sets of possible values, and type theory gives us an algebra for working with them. A union of two types is like the union of two sets: all the values that belong to either one. An intersection narrows it down: only the values that belong to both. "Integer greater than 5" is a subset of "integer." "Integer greater than 5 and less than 3" is the empty set, no values at all. The same operations you drew with Venn diagrams, applied to the possible values of an expression.
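To make the set intuition concrete, here is a minimal sketch (our illustration, not the analyzer's actual type representation) of integer ranges as types, with intersection and an emptiness check:

```typescript
// Sketch: an integer range type, treated as a set of possible values.
// Unbounded sides are represented with ±Infinity.
interface IntRange {
  min: number; // inclusive lower bound
  max: number; // inclusive upper bound
}

// Intersection of two sets of integers: tighten both bounds.
function intersect(a: IntRange, b: IntRange): IntRange {
  return { min: Math.max(a.min, b.min), max: Math.min(a.max, b.max) };
}

// A range whose lower bound exceeds its upper bound contains no values.
function isEmpty(t: IntRange): boolean {
  return t.min > t.max;
}

// "integer greater than 5" ∩ "integer less than 3" = the empty set.
const gt5: IntRange = { min: 6, max: Infinity };
const lt3: IntRange = { min: -Infinity, max: 2 };
console.log(isEmpty(intersect(gt5, lt3))); // true: a contradiction
```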
The CQL static analyzer is built on this algebra. It walks the concrete syntax tree of a query and infers a type at every node. An exact type. A type that carries as much information as the expression allows.
When the analyzer sees user's age > 18, it does not just produce boolean. It narrows age from integer to age ≥ 19 in the branch where the condition is true. It knows the comparison could go either way, but it also knows what the world looks like if it is true.
When it sees every item in [1, 2, 3] satisfies item > 0, it does not stop at boolean. It can see every element. It can check each one. The result is the constant true. Instead of evaluating this expression thousands of times per second and getting the same result every time, the analyzer deduces the answer once and the runtime never has to ask again. Instantaneous, zero cost, provably correct.
If you have used TypeScript, you already know this feeling. TypeScript narrows string | number inside an if (typeof x === 'string') block. It tracks control flow. It knows that after a type guard, the variable has a more specific type. Our analyzer does the same thing, but for a query language that operates on live user data instead of static variables.
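For concreteness, this is the TypeScript behavior being referenced:

```typescript
// TypeScript narrows the union based on control flow.
function describe(x: string | number): string {
  if (typeof x === "string") {
    // In this branch, x: string — .toUpperCase() type-checks.
    return x.toUpperCase();
  }

  // Here the string arm has been eliminated, so x: number.
  return x.toFixed(2);
}

console.log(describe("hi"));    // "HI"
console.log(describe(3.14159)); // "3.14"
```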
The third answer
Most systems think in two values. True or false. Yes or no. Matches or does not match. This works fine when you have all the information. But a static analyzer does not have all the information. It is reasoning about expressions that will run later, against data that does not exist yet. It needs a way to say "I don't know."
The CQL analyzer uses three-valued logic. Every question it asks has three possible answers: yes, maybe, or no.
"Is 7 greater than 5?" Yes. The analyzer can see both values. The answer is certain.
"Is some unknown integer greater than 5?" Maybe. The integer could be 3 or it could be 300. The analyzer cannot tell without knowing the actual value, so it says so honestly.
"Is a string greater than 5?" No. Not maybe. Not "it depends." Strings and numbers are not comparable in CQL. The answer is definitively no.
This sounds simple, but it changes everything about how the analyzer reasons. Consider what happens when two conditions are combined with and. In two-valued logic, true and true is true, and anything involving false is false. In three-valued logic, yes and maybe is maybe. The uncertainty propagates. The analyzer will not claim certainty it does not have.
Now consider or. maybe or yes is yes. If one side is definitely true, the whole thing is true regardless of the other side. But maybe or maybe is still maybe. The analyzer tracks uncertainty through every logical operation, every comparison, every branch. It never rounds "I don't know" up to "yes" or down to "no."
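These truth tables are Kleene's strong three-valued logic. A minimal sketch of the rules just described:

```typescript
// Sketch: three-valued logic. "maybe" means the analyzer cannot decide.
type Tri = "yes" | "maybe" | "no";

function and3(a: Tri, b: Tri): Tri {
  if (a === "no" || b === "no") return "no"; // one definite no sinks the conjunction
  if (a === "maybe" || b === "maybe") return "maybe"; // uncertainty propagates
  return "yes";
}

function or3(a: Tri, b: Tri): Tri {
  if (a === "yes" || b === "yes") return "yes"; // one definite yes suffices
  if (a === "maybe" || b === "maybe") return "maybe";
  return "no";
}

console.log(and3("yes", "maybe"));  // "maybe" — no claimed certainty
console.log(or3("maybe", "yes"));   // "yes" — certain regardless of the other side
console.log(or3("maybe", "maybe")); // "maybe" — still unknown
```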
This is what makes contradiction detection possible. When the analyzer sees user's age > 5 and user's age < 3, it does not just check whether each side could be true in isolation. It asks: given the constraint from the left side, can the right side still be true? The left side narrows age to age ≥ 6. The right side requires age ≤ 2. The intersection is empty. The answer goes from maybe to no. Not because the analyzer guessed, but because it proved it.
Three-valued logic is also what prevents false positives. When the analyzer cannot determine something, it says maybe and moves on. It does not flag uncertain results as errors. It does not claim an expression is wrong just because it cannot prove it is right. This is the difference between a useful tool and an annoying one.
This is not a new idea. But the constraints are new.
Prior art and related work
Static analysis and type inference have a long history, and we did not build this in a vacuum.
TypeScript set the standard for developer-facing type narrowing. Its control flow analysis, conditional types, and template literal types showed the world what a type system could do when it is willing to be aggressive about inference. TypeScript's influence is visible in our constraint propagation system, where conditions like age > 18 narrow the type of age in subsequent branches.
Flow, Meta's type checker for JavaScript, was an early innovator in flow-sensitive typing. It pioneered many of the ideas that TypeScript later popularized, including refinement types that change based on control flow. Flow demonstrated that you could have sound type narrowing in a dynamically typed language, which is exactly the problem CQL faces.
Rust's borrow checker operates in a different domain entirely, but the principle is the same: reason about runtime behavior without running the program. Rust proves memory safety at compile time. Our analyzer proves type safety and detects contradictions at analysis time. Different targets, same ambition: catch an entire class of errors before they happen.
But none of these systems had to deal with CQL's particular constraints.
TypeScript analyzes programs with explicit type annotations and fixed schemas. CQL operates on dynamic user attributes. A user might have a favoriteColor property that was created five minutes ago with no schema definition.
Flow and TypeScript analyze programs that sit still. CQL evaluates against live context that changes with every page view, every cart update, every passing second.
Rust reasons about a single program's memory. CQL reasons about queries that touch client state, server state, external data sources, and temporal data, all in the same expression.
And CQL was designed for non-programmers. It needs formal type guarantees for a language whose users may not know what a type is.
Building a static analyzer is hard. Building one for a language that operates on live, dynamic, schema-less user data? That is a different kind of problem.
So how does it actually work?
How the analyzer works
Let us walk through a CQL expression and watch the analyzer reason about it.
```
user's age > 18 and user's cart's total > 100
```

The analyzer starts on the left. It looks up user in the current scope and finds a named object type, something like a dynamic schema that knows user has an age property of type integer, a cart property that resolves to another named object, and so on. It applies the property selector age and gets integer. Now it has the left side of the comparison.
The comparison integer > 18 cannot be resolved to a constant. An integer could be 5 or 500. The result is boolean with a certainty of maybe. But the analyzer extracts something valuable: a constraint. In the branch where this condition is true, age is not just integer anymore. It is age ≥ 19, an integer with a minimum value of 19. This constraint gets carried forward.
The right side follows the same pattern. The analyzer chains selectors: user's cart resolves to a cart object, cart's total resolves to float. The comparison > 100 produces another maybe, and another constraint: if true, total is total > 100.
Now the and operator merges both constraint sets and checks satisfiability. Can there exist an integer that is at least 19 and a float that is greater than 100? Yes. The constraints are compatible. The result is boolean.
But the analyzer does more than return a type. It forks reality. It constructs two parallel universes: one where the expression is true, and one where it is false. In the universe where the condition holds, age is age ≥ 19 and total is total > 100. In the universe where it does not, the types are different. In the false universe, things are trickier. If the whole expression is false, it could be because the age condition failed, or the total condition failed, or both. The analyzer cannot know which one, so it does not guess. It keeps the types wide rather than risk narrowing them incorrectly. Better to say "I don't know" than to be wrong.
The final result: boolean, with the knowledge that in the truthy branch, age ≥ 19 and total > 100.
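The forking step can be sketched for a single comparison (the names and shapes here are illustrative, not the analyzer's real API):

```typescript
// Sketch: fork the type environment on "x > limit" for an integer-typed x.
interface IntRange {
  min: number; // inclusive
  max: number; // inclusive
}

interface Forked {
  whenTrue: IntRange;  // narrowed type in the universe where the condition holds
  whenFalse: IntRange; // narrowed type in the universe where it does not
}

function analyzeGreaterThan(x: IntRange, limit: number): Forked {
  return {
    // If x > limit is true, x is at least limit + 1.
    whenTrue: { min: Math.max(x.min, limit + 1), max: x.max },
    // If it is false, x is at most limit.
    whenFalse: { min: x.min, max: Math.min(x.max, limit) },
  };
}

const age: IntRange = { min: 0, max: 150 };
const forked = analyzeGreaterThan(age, 18);
console.log(forked.whenTrue);  // { min: 19, max: 150 } — age ≥ 19
console.log(forked.whenFalse); // { min: 0, max: 18 }
```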
Now let us see what happens when things get interesting. Take this expression:
```
user's age > 5 and user's age < 3
```

The analyzer infers user's age as integer. The first comparison extracts a constraint: age in age ≥ 6. The second extracts another: age in age ≤ 2. The and merges them and checks satisfiability. The intersection of age ≥ 6 and age ≤ 2 is empty. No integer can be simultaneously greater than 5 and less than 3. The result is not boolean. It is false. A constant. The analyzer knows this expression will never be true, for any user, at any time. This is not a runtime check. This is a mathematical proof.
Functions that wait
Most people do not know this, but CQL has first-class functions. Inline, reusable recipes that take an input and produce an output. In programming, these are sometimes called lambdas. You can think of x => x + 1 as a machine with a slot: put a number in, get that number plus one out. Here it is being defined and immediately called with 41:
```
(x => x + 1)(41)
```

A naive analyzer would look at this function, see that x has no type annotation, shrug, and say the result could be anything. Useless.
The CQL analyzer does something smarter. It waits. When the function is defined, the analyzer records its structure but does not try to figure out the return type. When the function is called with 41, the analyzer goes back in with x bound to the constant type 41. Now x + 1 is 41 + 1, which is integer. Not mixed, not number, not decimal.
The function's type is not determined when it is written, but when it is used. The actual input shapes the output. This is the same principle behind TypeScript's generic inference, and it is what allows the analyzer to be precise even when the expression is abstract.
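The idea can be sketched with a tiny expression tree (again, illustrative structures, not the analyzer's actual ones): the lambda's body is recorded as data, and inference only runs once the call site supplies the argument's type.

```typescript
// Sketch: defer a lambda's type until its call site.
type Expr =
  | { kind: "const"; value: number }
  | { kind: "param" } // the lambda's single parameter
  | { kind: "add"; left: Expr; right: Expr };

// A "type" here is either a known constant or the wide "integer" type.
type Ty = { kind: "constant"; value: number } | { kind: "integer" };

// Infer the body's type with the parameter bound to the argument's type.
function inferCall(body: Expr, arg: Ty): Ty {
  switch (body.kind) {
    case "const":
      return { kind: "constant", value: body.value };
    case "param":
      return arg; // the call site shapes the result
    case "add": {
      const l = inferCall(body.left, arg);
      const r = inferCall(body.right, arg);
      if (l.kind === "constant" && r.kind === "constant") {
        return { kind: "constant", value: l.value + r.value };
      }
      return { kind: "integer" };
    }
  }
}

// (x => x + 1)(41): the body is x + 1, the argument type is the constant 41.
const body: Expr = {
  kind: "add",
  left: { kind: "param" },
  right: { kind: "const", value: 1 },
};
console.log(inferCall(body, { kind: "constant", value: 41 })); // constant 42
```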
The black hole of the type system
What happens when an operation is impossible? What is the type of user's nickname when no such property exists? The analyzer has a type for this: nothing. A black hole. Anything that touches it becomes nothing too. nothing + 1 is nothing. nothing > 5 is nothing. It swallows everything silently, propagating through the entire expression until a separate layer looks at the result and tells the user what went wrong.
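The absorbing behavior can be sketched in a few lines (our illustration; the analyzer's real type lattice is larger):

```typescript
// Sketch: "nothing" as the absorbing element of type-level arithmetic.
type Ty =
  | { kind: "nothing" }
  | { kind: "constant"; value: number }
  | { kind: "integer" };

const NOTHING: Ty = { kind: "nothing" };

// Type-level addition: nothing swallows everything silently.
function addTypes(a: Ty, b: Ty): Ty {
  if (a.kind === "nothing" || b.kind === "nothing") return NOTHING;
  if (a.kind === "constant" && b.kind === "constant") {
    return { kind: "constant", value: a.value + b.value };
  }
  return { kind: "integer" };
}

// A missing property yields nothing; any expression touching it is nothing too.
console.log(addTypes(NOTHING, { kind: "constant", value: 1 })); // nothing
console.log(addTypes({ kind: "constant", value: 2 }, { kind: "constant", value: 3 })); // constant 5
```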
What becomes possible
A static analyzer that understands types is useful on its own. But the real value comes not from what it knows, but from what it enables. Type information at every node of the syntax tree unlocks capabilities that would be impossible without it. Let us walk through them.
Query validation
The most immediate application: catching mistakes before they reach production.
Audience rules must return a boolean. That is the contract. But what if someone writes an expression that returns a string? Or a number? Without the analyzer, you find out at runtime, or worse, you never find out. The expression evaluates, the runtime coerces the result, and the audience rule silently misbehaves.
With the analyzer, the return type is known before the query ever runs. If it is not boolean, you know immediately. Not at deployment. Not at evaluation time. At authoring time.
But type checking is just the beginning. The analyzer detects contradictions:
```
user's age > 50 and user's age < 20
```

This is always false. The analyzer knows. It can tell you before you save the rule.
It detects tautologies:
```
user's age > 18 or user's age <= 18
```

This is always true. It matches every user. Probably not what you intended for a targeting rule.
It resolves quantifiers statically. every item in [1, 2, 3] satisfies item > 0 is true. The analyzer checked each element. No runtime needed.
It performs constant folding as a form of validation. count of [] > 5 is false. An empty collection has zero elements. Zero is not greater than five. The analyzer knows this expression will never match.
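Constant folding can be sketched over a tiny expression tree (illustrative node shapes, not the real CQL syntax tree):

```typescript
// Sketch: fold "count of [] > 5" to the constant false without evaluating.
type Node =
  | { kind: "number"; value: number }
  | { kind: "list"; items: Node[] }
  | { kind: "count"; of: Node }
  | { kind: "gt"; left: Node; right: Node };

// A folded result: a constant, or null when no constant can be derived.
type Folded = number | boolean | null;

function fold(node: Node): Folded {
  switch (node.kind) {
    case "number":
      return node.value;
    case "list":
      return null; // a list is not itself a scalar constant
    case "count":
      // The length of a literal list is known statically.
      return node.of.kind === "list" ? node.of.items.length : null;
    case "gt": {
      const l = fold(node.left);
      const r = fold(node.right);
      return typeof l === "number" && typeof r === "number" ? l > r : null;
    }
  }
}

// count of [] > 5: zero is not greater than five, so this never matches.
const expr: Node = {
  kind: "gt",
  left: { kind: "count", of: { kind: "list", items: [] } },
  right: { kind: "number", value: 5 },
};
console.log(fold(expr)); // false
```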
These might look like easily catchable errors. And with literal values, they are. But in practice, conditions are built piece by piece in a UI, by different people, at different times. The audience rule says user's age > minAge, and somewhere else minAge is set to 50. Another condition says user's age < maxAge, and maxAge is 20. Nobody sees them side by side. The analyzer does.
Language server and autocomplete
Type information at every node means something else: we can build a language server.
If you know the type of the expression to the left of the cursor, you know what properties are available. You know what operations make sense. You can offer autocomplete suggestions that are not just syntactically valid but semantically meaningful.
Hover over user's cart's total and the language server can tell you: float. Hover over user's age > 18 and it tells you: boolean, with age narrowed to age ≥ 19 in the truthy branch. Mistype a property name and you get a squiggly red line before you finish the expression.
There is a challenge here, and it is a significant one. CQL operates on dynamic attributes. Users can create custom profile attributes at any time: user's favoriteColor, user's loyaltyTier, user's lastPurchaseCategory. There is no fixed schema that defines all possible properties in advance.
The analyzer handles this through a reflection system that learns from your data. Built-in properties like age and email have known types. But custom attributes are auto-discovered from the events you track. Set a custom attribute to a string, and the reflection system learns it is a string. Later set it to a number, and the system updates the type to a union of string and number. No schema file to maintain. No configuration to update. The type information emerges from the data itself.
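The widening behavior can be sketched like this (an illustrative model, not the real reflection API):

```typescript
// Sketch: learn an attribute's type from observed values, widening to a
// union when observations disagree.
type Primitive = "string" | "number" | "boolean";

class Reflection {
  private types = new Map<string, Set<Primitive>>();

  // Record the primitive type of each observed value.
  observe(attribute: string, value: string | number | boolean): void {
    const seen = this.types.get(attribute) ?? new Set<Primitive>();
    seen.add(typeof value as Primitive);
    this.types.set(attribute, seen);
  }

  // Report the current type: a single primitive, or a union of them.
  typeOf(attribute: string): string {
    const seen = this.types.get(attribute);
    return seen ? [...seen].sort().join(" | ") : "unknown";
  }
}

const reflection = new Reflection();
reflection.observe("favoriteColor", "teal");
console.log(reflection.typeOf("favoriteColor")); // "string"

reflection.observe("favoriteColor", 7); // later set to a number
console.log(reflection.typeOf("favoriteColor")); // "number | string"
```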
The result is an IDE experience for a query language. Autocomplete, hover information, error highlighting, go-to-definition for properties. The editor becomes a CQL teacher. It shows you what is possible, warns you when something is wrong, and helps you understand the types flowing through your expression.
For a language designed to be written by non-programmers, this matters more than it would for a conventional programming language. The people writing CQL queries may not know what age ≥ 19 means. But they understand a red underline that says "this condition can never be true."
Compiler and transpiler
Here is where things get exciting for the performance-minded.
If the analyzer knows the exact type of every expression, a compiler can use that information to emit specialized code. When the analyzer says 0 ≤ n ≤ 100, the compiler does not need to handle arbitrary-precision arithmetic. When the analyzer says 'HELLO', the compiler can replace the entire expression with a constant.
CQL currently runs through an interpreter. The query is parsed into a tree, and evaluators walk the tree node by node. This is flexible and fast, but not as fast as it could be. A compiler that leverages type information could emit native code that skips the tree walk entirely.
But compilation is only part of the picture. Type information also enables transpilation. The same CQL expression could be compiled to JavaScript for edge evaluation, keeping personalization logic close to the user. It could be compiled to SQL for batch processing, enabling audience computation over millions of users in a data warehouse.
Constant folding, powered by the analyzer, reduces the tree before compilation even starts. If the analyzer resolves 1 + 2 to 3, the compiler never sees the addition. If it resolves count of [1, 2, 3] > 0 to true, the compiler emits a constant. Less work for the compiler means faster output.
Consider this: now's hour > 18. This expression depends only on the current time. With type information, a transpiler could emit a JavaScript function that runs on the client without a server round-trip. The personalization happens at the edge, in milliseconds, with no network latency.
What would it mean to run a personalization query at the speed of native code, before a response even leaves the server?
Type generation
Today, when you use CQL in a React component, you specify the expected type manually:
```typescript
const isDeveloper = useEvaluation<boolean>("user's persona is 'developer'");
```

That <boolean> is a promise from the developer. "I believe this expression returns a boolean." The TypeScript compiler trusts you. If you are wrong, it will not catch it. If the expression actually returns a string, TypeScript is happy and the runtime is not.
With the analyzer, the type can be generated from the query itself. The analyzer knows the expression returns boolean. The generic parameter becomes unnecessary:
```typescript
const isDeveloper = useEvaluation("user's persona is 'developer'");
//    ^? const isDeveloper: boolean
```

The type flows from the query to the variable without any manual annotation. This is not just convenient. It is correct by construction. The type cannot drift from the query because it is derived from the query.
And it gets more interesting with precise types:
```typescript
const label = useEvaluation("user's plan is 'premium' ? 'VIP' : 'Standard'");
//    ^? const label: 'VIP' | 'Standard'
```

The analyzer knows the conditional returns either 'VIP' or 'Standard'. Not string. The exact union of constant string types. TypeScript can narrow on this. Your components can pattern-match on the precise values. The type system flows from CQL through the analyzer into TypeScript and out to your component logic, with no gaps.
Type generation saves keystrokes. But the next capability saves something more fundamental.
Reactivity
This is the heart of it. This is where static analysis stops being a developer tool and becomes runtime infrastructure.
Listen to the signals.
The CQL analyzer does not only infer types. It traces reactive signals. For every expression, it determines: what events in the outside world could change this expression's result?
The system tracks over 40 signal types, from time changes and profile updates to cart modifications, page navigations, completed orders, and reached goals. Dozens of signals from the real world, from browsers and servers and external data sources, each representing something that could change a query's result.
When the analyzer encounters user's cart's total, it does not just infer the type as a float. It also traces the dependency: this expression depends on the cart. If the cart changes, the expression might produce a different result.
The tracing is precise in a way that matters. In user's cart's total, the analyzer collects signals only from the leaf, the total. It does not collect signals from the user or the cart, because those are intermediate navigation steps, not the value being read. This is leaf-only collection, and it is what makes the dependency sets small and precise instead of bloated with irrelevant events.
The tracing respects control flow. If the analyzer determines that a branch is statically dead (because the condition is always true or always false), the signals from the dead branch are excluded. Why subscribe to cart events if the branch that reads the cart is never taken?
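Both behaviors can be sketched together (illustrative signal names and node shapes, not the real tracer):

```typescript
// Sketch: leaf-only signal collection plus dead-branch pruning.
type Signal = "cart changed" | "profile changed" | "time changed";

interface Selector {
  path: string[];     // e.g. ["user", "cart", "total"]
  leafSignal: Signal; // the signal associated with the value actually read
}

// Only the leaf contributes: user's cart's total subscribes to cart changes,
// not to every event that touches "user" along the way.
function traceSelector(selector: Selector): Set<Signal> {
  return new Set([selector.leafSignal]);
}

interface Conditional {
  conditionConstant: boolean | null; // null: unknown at analysis time
  thenSignals: Set<Signal>;
  elseSignals: Set<Signal>;
}

// A statically-decided condition drops the dead branch's signals entirely.
function traceConditional(node: Conditional): Set<Signal> {
  if (node.conditionConstant === true) return node.thenSignals;
  if (node.conditionConstant === false) return node.elseSignals;
  return new Set([...node.thenSignals, ...node.elseSignals]);
}

const cartTotal = traceSelector({
  path: ["user", "cart", "total"],
  leafSignal: "cart changed",
});
console.log([...cartTotal]); // ["cart changed"]

const pruned = traceConditional({
  conditionConstant: true, // the else-branch is provably dead
  thenSignals: new Set<Signal>(["profile changed"]),
  elseSignals: new Set<Signal>(["cart changed"]),
});
console.log([...pruned]); // ["profile changed"]
```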
Zero-config reactivity.
This enables something that sounds simple but is architecturally profound: server-side reactive evaluation.
When the cart changes, the system knows exactly which queries depend on it. Only those queries re-evaluate. Queries that depend on the user profile do not re-evaluate. Queries that depend on the time do not re-evaluate. Queries that depend on nothing (because they are constant) never re-evaluate.
This is not polling. This is not "re-evaluate everything every N seconds." This is precise, event-driven invalidation powered by static analysis. The analyzer examines the query once, at authoring time, and produces a set of signals that governs the query's lifetime at runtime.
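The runtime side of this can be sketched as an inverted index from signals to queries (hypothetical names; the production system's structures will differ):

```typescript
// Sketch: precise invalidation. Each signal maps to the queries that
// depend on it, so an event re-evaluates only the affected queries.
type Signal = "cart changed" | "profile changed" | "time changed";

class InvalidationIndex {
  private bySignal = new Map<Signal, Set<string>>();

  // Register a query with the signal set the analyzer traced for it.
  register(queryId: string, signals: Signal[]): void {
    for (const signal of signals) {
      const set = this.bySignal.get(signal) ?? new Set<string>();
      set.add(queryId);
      this.bySignal.set(signal, set);
    }
  }

  // When an event fires, only these queries need to re-evaluate.
  affectedBy(signal: Signal): string[] {
    return [...(this.bySignal.get(signal) ?? [])];
  }
}

const index = new InvalidationIndex();
index.register("free-shipping-banner", ["cart changed"]);
index.register("evening-offer", ["time changed", "profile changed", "cart changed"]);
index.register("vip-badge", ["profile changed"]);

console.log(index.affectedBy("cart changed")); // banner + evening offer
console.log(index.affectedBy("time changed")); // evening offer only
```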
Let us see what this looks like in practice:
```typescript
// Automatically re-renders when cart changes.
// Analyzer traced: [cart changed]
function CartBanner() {
  const showBanner = useEvaluation("user's cart's total > 100");

  if (!showBanner) return null;

  return <Banner>Free shipping on your order!</Banner>;
}
```

No polling. No timer. The hook would subscribe to exactly one signal: cart changed. The server could push updates through high-performance server-sent events, and the client would re-evaluate only when the right signal arrives. When the user adds an item to their cart, the signal fires, the query re-evaluates, and the component re-renders if the result changed. When the user updates their profile, nothing happens. When the session changes, nothing happens. The component reacts to exactly the events that matter.
Now consider a more complex case:
```typescript
// Mixed sources: client + server + temporal data.
// Analyzer traced: [time changed, profile changed, cart changed]
function PersonalizedOffer() {
  const offer = useEvaluation(
    `user's age > 25 and cart's total > 50 and now's hour >= 18
       ? 'Evening deal: 20% off!'
       : 'Check out our daily specials'`
  );

  return <OfferBanner>{offer}</OfferBanner>;
}
```

Three signals. Time changed, because the expression reads the current hour. Profile changed, because it reads the user's age. Cart changed, because it reads the cart total. The component re-evaluates only when one of these three events fires. Not when the user scrolls. Not when a page opens. Not when a session attribute changes. Three signals, precisely traced from the expression.
This is where the architecture becomes genuinely new. Static analysis at authoring time producing runtime reactivity guarantees. The analyzer examines the query, derives the dependency set, and that dependency set governs how the query behaves for its entire lifetime. Build time and runtime, connected through type-level reasoning.
Think about the last time you built a real-time UI. How much of your code was just watching for changes? Setting up WebSocket listeners. Debouncing events. Reconciling state. Polling for updates. Managing subscription lifecycles.
What if the query itself told you what to watch?
What if the personalization layer knew, with mathematical precision, which events could change which results, and handled reactivity automatically?
That is what CQL's static analyzer enables. Not just knowing types. Knowing dependencies. And acting on them.
The scale of precision.
Consider 50 personalization queries running simultaneously. Without signal tracing, every event re-evaluates all 50. With it, a cart change triggers 3, a page open triggers 7, a profile update triggers 12. No developer had to think about dependencies. The analyzer figured it out.
This precision compounds. In a single-page application where the user browses for minutes, the difference between "re-evaluate everything on every event" and "re-evaluate only the affected queries" is the difference between a sluggish experience and a snappy one. The fewer unnecessary re-evaluations, the less work for the evaluator, the less data transferred, the fewer React re-renders. Static analysis at authoring time produces performance gains at runtime, automatically.
Dead branches, silent signals.
It goes further. If the analyzer proves a branch is unreachable, it drops its signals entirely. Say your product only has one plan right now. A conditional that checks the plan collapses to a constant, and the signals from the dead branch disappear. Constant folding feeds the signal tracer, which feeds the runtime. Each layer amplifies the others.
Type inference and reactive systems have both existed for decades. Using one to power the other, deriving precise event subscriptions from static analysis of a query language, is a combination we find genuinely exciting.
What comes next
The analyzer is built. Now we are building on top of it. But the direction that excites us most is a different one entirely.
Imagine CQL expressions that tap into AI predictions:
```
user's likelihood of buying > 0.8 and user's cart's total > 50
```

```
user's churn risk > 0.6 ? "We miss you! Here's 20% off" : "Check out what's new"
```

These are not CQL expressions you can write today. But we are preparing the ground for it. AI models running in the background, continuously updating predictions. Those predictions surfaced as properties in the CQL context, queryable like any other attribute. When the model re-scores a user, only the queries that depend on that prediction re-evaluate. The UI updates. No polling. No manual wiring. The same signal tracing that handles cart changes would handle prediction changes, with the same precision.
Write a question in plain English. Get a type-checked, dependency-traced, reactively-updated answer. Whether that answer comes from a database field, a cart total, or a machine learning model is an implementation detail. The query language, and the analyzer behind it, treats them all the same.
The CQL static analyzer is just the beginning.