Engineering Creed
Why CloseVector Was Built the Way It Was Built
Published by CloseVector, Legal AI Infrastructure
By Dean Hoffman, CloseVector. Infrastructure and risk-management perspective. Not an attorney. Not legal advice.
CloseVector was not built to earn an AI vendor's trust. It was built to preserve a firm's custody on behalf of the firm's clients.
The ABA Model Rules describe the compliance surface a law firm has to operate against. Rule 1.6 treats "information relating to the representation" as confidential: not only the glamorous parts, not only the life-or-death parts, all of it. (ABA Rule 1.6) "Informed consent" is not a checkbox. It requires the lawyer to communicate adequate information and explanation about the material risks and reasonably available alternatives. (ABA Rule 1.0(e)) And when a firm uses outside services, including internet-based services, Rule 5.3 makes the lawyer responsible for ensuring those services are compatible with the lawyer's professional obligations. (ABA Rule 5.3 Comment 3)
Those rules describe what a firm has to be able to do. CloseVector was designed to let a firm actually do it, with objective artifacts rather than vendor assurances.
Consider the reality of routing client facts, client documents, or privileged strategy into a cloud AI system. That content crosses a boundary the firm does not control and cannot independently audit. The firm cannot see the full handling path. The firm cannot control the environment. The firm cannot independently verify what system ran, where it ran, who could access it, what was retained, or what changed over time. The firm can ask. The firm can receive assurances. The firm still cannot produce an end-to-end chain of custody for the processing and access events inside the vendor boundary.
If a firm cannot enumerate the risk paths with specificity, the client cannot evaluate them. If the client cannot evaluate them, the consent is not informed. It becomes paperwork pretending to be ethics. ABA Formal Opinion 512 makes this stricter, not looser. It states that boilerplate engagement-letter language is not sufficient, and it expects lawyers to understand and communicate the risks tied to how the tool handles client information. (ABA Formal Opinion 512)
The harm, when it occurs, is irreversible. Once privileged strategy or sensitive facts escape a firm's custody, no one can unsee them. No one can unshare them. No one can rewind who received them. Litigation after the fact is damage control; it is not prevention. In high-stakes matters, that irreversibility is the whole game. Privilege does not run on hope. It runs on custody, control, and proof.
Even setting consent aside, custody itself is structurally broken in a cloud-hosted AI model.
Industry custom is not the standard of care. In T.J. Hooper, Judge Learned Hand observed plainly that an entire industry can lag behind reasonable prudence. (60 F.2d 737 (2d Cir. 1932)) "Everyone uses it" is not a defense when safer options were available and convenience was preferred.
CloseVector was not built to "balance" duties against vendor convenience. It was not built so that procurement checklists could launder the core issue. "No training" contract language, SOC reports, and vendor dashboards do not alter the structural fact that a remote vendor boundary is opaque to the firm and mutable over time. A product designed for firms that accept this framing had to sit on the firm's side of the boundary, not the vendor's.
The question every firm should be able to answer
The question is this: can a firm prove, with objective artifacts, what happens to client data inside a given system? Where does it go? Who can access it? How long does it persist? How would the firm detect and respond to exposure? If those facts cannot be proven, the engineering question becomes whether local custody is the more defensible default. Today, for cloud AI systems, those facts cannot be proven in a way that is independent, complete, and stable over time. That observation is the engineering premise behind CloseVector.
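What an "objective artifact" could look like is easier to see with a sketch. The record below is purely illustrative, not CloseVector's actual schema; every field name and value is an assumption. It simply restates the questions above as fields a firm could log, on hardware it controls, for each processing event.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class CustodyEvent:
    """One objective artifact: what ran, where, who could touch the data, how long it persists."""
    matter_id: str        # which representation the data relates to
    document_hash: str    # fingerprint of the exact bytes processed
    system: str           # what system ran (model, version)
    location: str         # where it ran: a host the firm controls
    accessed_by: str      # who or what had access during processing
    retained_until: str   # how long the data and any derived outputs persist
    occurred_at: str      # when the event happened, recorded in UTC

event = CustodyEvent(
    matter_id="2024-0117",            # hypothetical matter number
    document_hash="sha256:9f2c...",   # placeholder fingerprint
    system="local-llm/v1",            # hypothetical system identifier
    location="onprem-node-02",        # hypothetical firm-controlled host
    accessed_by="svc-inference",      # hypothetical service account
    retained_until="2025-06-30T00:00:00Z",
    occurred_at=datetime.now(timezone.utc).isoformat(),
)

# The artifact is plain data the firm holds; it can be produced for an audit
# without asking a vendor to attest to anything.
print(json.dumps(asdict(event), indent=2))
```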
CloseVector is air-gapped, on-premises AI infrastructure. Storage and compute sit inside the firm. The audit surface belongs to the firm and can be verified without vendor cooperation and without vendor visibility into internal handling. Cloud is reserved for work that is genuinely non-confidential, and even then it is treated as a known exposure rather than a safe default. Local custody is not perfect, but its audit surface can be controlled and verified independently.
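One way a firm-owned audit surface can be verified without vendor cooperation is a hash-chained log, where each entry commits to its predecessor, so any edit, reorder, or deletion inside the chain is detectable from the log alone. The sketch below illustrates that general technique under assumed names; it is not a description of CloseVector's internals.

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting value for the first link

def chain_entries(entries):
    """Link audit entries so each record commits to the hash of its predecessor."""
    prev, chained = GENESIS, []
    for entry in entries:
        body = json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode()).hexdigest()
        chained.append({"entry": entry, "prev": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every link; a tampered or missing record breaks the chain."""
    prev = GENESIS
    for record in chained:
        body = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log = chain_entries([
    {"event": "ingest", "matter": "2024-0117"},
    {"event": "inference", "matter": "2024-0117"},
])
print(verify_chain(log))  # True
```

The point of the sketch is the property, not the code: verification is local arithmetic over records the firm already possesses, requiring no vendor cooperation and granting no vendor visibility.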
Once routine exceptions accumulate, irreversibility turns into a probability argument, and probability arguments are how firms get trapped. No one waits for proof of a crash before buying insurance. No one should route a client's most sensitive material through a third party and treat the low-probability scenario as acceptable. In legal work, the tail risk is the risk.
CloseVector was built for firms that want to use AI aggressively and keep custody aggressively. The standard described here is not aspirational. It is the minimum level of proof the system was designed to produce before a firm exposes a client to irreversible disclosure risk.