Bespoke empirical analysis on any TradeWeave-adjacent question
Custom research covers the engagements that do not fit a pre-packaged offering: replications extended to an updated sample, new tariff microsimulations, proprietary-dataset ingestions, difference-in-differences designs on a specific policy event, custom scorecards, and ad-hoc empirical questions anywhere on the TradeWeave surface. The common thread is published methods, clean primary data, and an agreed output format that the client can actually use.
Problem
A client arrives with a question that is clearly empirical, clearly trade-adjacent, and clearly not answered by any of the pre-packaged offerings. Sometimes the question is an extension: a replication published five years ago needs its sample refreshed through the most recent BACI release, with the same specification and the same inferential care. Sometimes the question is a new simulation: the client needs a partial-equilibrium tariff microsimulation on a specific HS chapter under three alternative policy paths, built to their preferred elasticity vector and their own welfare assumptions. Sometimes it is an ingestion: a proprietary shipment-level dataset needs to be cleaned, joined to BACI, and validated against public aggregates before it can be used for anything else. Sometimes it is a policy evaluation: a trade measure was announced on a known date, and the client wants a difference-in-differences estimate of its effect on bilateral flows at HS6, with the robustness checks a referee would demand.
What every version of the brief has in common is that the deliverable has to hold up to the same standards the workbench holds its public pages to: primary data, cited methods, verified numbers, and no mock values ever. The scoping conversation at the start of the engagement is what aligns the brief with those standards and with the client's own use case.
Typical requests
The distribution of past custom engagements clusters into five patterns.
- Replication extensions: an established paper re-estimated on an updated sample with the same specification, plus any modifications the extra years require.
- New tariff microsimulations: a partial-equilibrium model built to the client's specification, usually with the Armington structure and elasticity vector the workbench already maintains, and run over a policy space the client defines.
- Proprietary-dataset ingestions: client-supplied shipment, customs, firm, or satellite data cleaned, documented, and joined to the open panels.
- Policy-event evaluations: usually a difference-in-differences or synthetic-control design applied to a specific trade measure with a known implementation date, producing a causal estimate and the placebo and robustness battery that a referee would ask for.
- Custom scorecards: a client portfolio of countries, products, corridors, or counterparties ranked and documented against a stated set of criteria.
Anything else in the TradeWeave-adjacent neighborhood is in scope: new interactive pages, new Parquet ingestions for the open workbench, hand-off code that plugs into the client's stack, or a one-off memo on a structural question that the public pages do not yet cover.
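The scorecard pattern in particular comes down to a scoring function the client can inspect and re-weight. A minimal sketch of what such a function might look like, with all entity names, criteria, and weights here being purely illustrative:

```python
def score_portfolio(rows, weights):
    """Rank portfolio entries by a weighted sum of named criteria.

    `rows` maps an entity (country, corridor, counterparty) to its
    criterion values; `weights` documents the criteria and their
    weights explicitly so the client can re-weight them later.
    """
    scores = {
        entity: sum(weights[c] * values[c] for c in weights)
        for entity, values in rows.items()
    }
    # Highest score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical two-criterion example with made-up inputs.
rows = {
    "VNM": {"growth": 0.8, "concentration": 0.2},
    "MEX": {"growth": 0.5, "concentration": 0.9},
}
weights = {"growth": 0.6, "concentration": 0.4}
ranked = score_portfolio(rows, weights)
```

Delivering the `weights` mapping alongside the output is what makes the ranking documented rather than opaque.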
Data
The default data foundation is the TradeWeave Parquet bank: CEPII BACI at HS6 for bilateral goods flows, CEPII Gravity for the covariates, World Bank World Development Indicators for macro context, FAOSTAT for agricultural and fertiliser trade, CEPII TRADHIST for long-horizon historical flows, CEPII CHELEM for national accounts and trade aggregates, Worldsteel and IEA critical-minerals panels where materials-sector work is in scope, IMF for macro and price panels, and the workbench's own live vessel-AIS and cargo-flight feeds where the question reaches real-time logistics. Client-supplied data is ingested to the same Parquet conventions (ZSTD-compressed, ISO3 uppercase, HS6 six-character text with leading zeros preserved, trade values in their source units with a documented conversion for display) and joined alongside the open tables.
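The join conventions above can be made concrete with a small normalization sketch. The function names are illustrative, not the workbench's actual API; the HS6 and ISO3 rules follow the conventions stated above:

```python
def normalize_hs6(code):
    """Coerce a product code to HS6 six-character text, preserving leading zeros.

    Codes often arrive as integers (which drop the leading zero) or as
    longer HS8/HS10 strings; both are coerced to six characters, with
    longer codes truncated to their HS6 prefix.
    """
    s = str(code).strip()
    if not s.isdigit():
        raise ValueError(f"non-numeric HS code: {code!r}")
    return s.zfill(6)[:6]

def normalize_iso3(code):
    """Coerce a country code to uppercase ISO3 text."""
    s = str(code).strip().upper()
    if len(s) != 3 or not s.isalpha():
        raise ValueError(f"not an ISO3 code: {code!r}")
    return s
```

Applying these at ingestion time is what lets a client table join cleanly against BACI and the other open panels.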
Where the question needs a data source that is not yet in the bank, the first sprint of the engagement is an ingestion: the source is downloaded from its primary provider, cleaned to the workbench conventions, documented with a data card, and written to Parquet. The ingestion script and the data card are delivered with the rest of the engagement artifacts.
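A data card in this workflow would record, at minimum, the primary source, retrieval date, units, and the conventions applied. A stdlib sketch of what such a card might contain; the field names are an assumption for illustration, not the workbench's actual schema:

```python
import json

def build_data_card(*, source, url, retrieved, units, notes=""):
    """Assemble a minimal data card for a newly ingested source."""
    return {
        "source": source,        # primary provider
        "url": url,              # where the raw file was downloaded
        "retrieved": retrieved,  # ISO date of the download
        "units": units,          # source units, before any display conversion
        "conventions": {
            "compression": "zstd",
            "country_codes": "ISO3 uppercase",
            "product_codes": "HS6 six-character text, leading zeros preserved",
        },
        "notes": notes,
    }

# Illustrative values only.
card = build_data_card(
    source="CEPII BACI",
    url="https://www.cepii.fr/",
    retrieved="2025-01-15",
    units="current USD thousands",
)
card_json = json.dumps(card, indent=2)  # shipped alongside the Parquet file
```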
Method sketch
Method choice follows the question. For replication extensions, the specification is held fixed from the original paper wherever possible and any necessary deviation is flagged explicitly in the deliverable. For microsimulations, the standard toolkit is partial-equilibrium Armington with HS6 elasticities from Kee-Nicita-Olarreaga (2008, Review of Economics and Statistics), with gravity-fit bilateral shares and a documented assumption on pass-through. For policy-event evaluations, the default is a PPML difference-in-differences with cluster-robust inference, with synthetic-control and event-study specifications run as robustness checks. For custom scorecards, the scoring function and its inputs are documented in full and delivered as part of the output so the client can re-weight later.
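The partial-equilibrium step can be sketched in a few lines. Under a constant import-demand elasticity and a pass-through parameter, a tariff change moves imports by approximately Δln M = ε · ρ · Δln(1 + t). The numbers below are illustrative, not estimates for any actual HS6 line:

```python
import math

def simulate_imports(m0, t0, t1, elasticity, pass_through=1.0):
    """Partial-equilibrium import response to an ad-valorem tariff change.

    m0           baseline import value
    t0, t1       tariff rate before and after (0.05 means 5%)
    elasticity   import-demand elasticity (negative for normal goods)
    pass_through share of the tariff change passed into the import price
    """
    dlog_price = pass_through * (math.log1p(t1) - math.log1p(t0))
    return m0 * math.exp(elasticity * dlog_price)

# Illustrative: a 10-point tariff rise on a line with elasticity -3
# cuts imports by roughly a quarter under full pass-through.
m1 = simulate_imports(m0=100.0, t0=0.0, t1=0.10, elasticity=-3.0)
```

A full engagement layers on top of this the gravity-fit bilateral shares and the welfare accounting, but the core mechanism is this elasticity-times-price-change step applied line by line.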
Where the brief needs a method that is not in the standard kit, the method is chosen by reference to the published literature, cited explicitly, and implemented from the primary source. The TradeWeave workbench already maintains a replications catalogue covering the most-cited trade papers of the past two decades, which is the first reference the custom-research team consults when scoping a new brief.
Deliverable
The deliverable is agreed up front and sized to the brief. A typical package includes a written research memo, the Parquet files that underlie every number in the memo, a Jupyter notebook or SQL scripts that reproduce the analysis from those files, and hand-off code where the engagement includes an ingestion or a custom simulation. For engagements that yield a new interactive analysis, the deliverable can include a workbench page built in the same conventions as the public pages, either for the client's internal use or, at the client's option, for public release.
Every deliverable cites its methods and its data sources in line with the public-workbench convention. No number ships without a traceable provenance to a primary source, and no computed value ships without a check against an independent benchmark where one exists.
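In practice, the benchmark check can be as simple as a relative-tolerance comparison against an independent aggregate. The 1% default tolerance below is an assumption for illustration; the actual threshold would be agreed per engagement:

```python
import math

def check_against_benchmark(computed, benchmark, rel_tol=0.01):
    """Flag a computed value that drifts from an independent benchmark.

    Returns True when the computed value is within `rel_tol` (relative
    tolerance) of the benchmark; the engagement decides whether a
    failure blocks shipping or triggers a documented explanation.
    """
    if benchmark == 0:
        return computed == 0
    return math.isclose(computed, benchmark, rel_tol=rel_tol)
```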
Related workbench pages
Custom engagements build on the same infrastructure and methodology catalogue that the public pages use. The live links below are where most custom briefs start, either because they define the method (replications, research, methodology) or because they define the data surface (data, sql) the work will draw on.
- Replications catalogue across the trade literature
- Research notes and working drafts
- Methodology register and citation map
- Data bank and download index
- SQL console over the open Parquet bank
Timeline
The standard engagement runs four to twelve weeks depending on scope. A replication extension closes in four to six weeks once the sample and specification are agreed. A full tariff microsimulation on a new HS chapter or policy space, or a clean policy-event evaluation with the full robustness battery, runs six to eight weeks. A proprietary-dataset ingestion with a substantive analysis on top takes eight to twelve weeks, with the first two to three weeks dedicated to the ingestion and the remainder to the analysis. Larger or multi-track engagements are scoped directly with the client.
To scope a custom engagement, contact the workbench with a one-paragraph description of the question and the preferred output format.