Publication Showcase



Jeronimo Carballo

"," (with Alejandro Graziano, Georg Schaur and Christian Volpe Martincus) Journal of International Economics, 154, 2025.
Abstract: We estimate import processing costs based on the time it takes to import. To do so, we first develop a theoretical model that extends existing time-cost measures to account for uncertainty in import processing. Second, we use detailed, highly disaggregated data on import processing dates and import values to provide estimates of processing costs that are consistent with the theory. This evidence indicates that our extensions to time-cost estimates are economically relevant to determining processing costs. According to our estimates, the tariff equivalent import processing cost is as high as 18 percent. WTO estimates suggest that the full implementation of the 2013 Trade Facilitation Agreement would reduce the time to trade by 1.5 days. In that case, processing costs would decrease to 13 percent.
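
A back-of-the-envelope reading of these figures, as a Python sketch of the abstract's arithmetic only (not the paper's estimation procedure):

    # Implied per-day processing cost: an 18% tariff-equivalent cost
    # falls to 13% when the time to trade drops by 1.5 days.
    cost_now, cost_tfa, days_saved = 0.18, 0.13, 1.5
    per_day = (cost_now - cost_tfa) / days_saved
    print(f"implied cost per day of delay: {per_day:.1%}")  # ~3.3%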


Yongmin Chen

  • "," (with X. Hua), The Economic Journal, 2026, forthcoming.
    Abstract: We study optimal liability for AI-powered products. Like human users, artificial intelligence (AI) can cause product failures that harm third parties. Additionally, it may introduce extreme risks of large-scale harm that render full liability impractical. Raising AI liability for ordinary loss above actual harm can decrease excessive autonomy and increase social welfare, even when it negatively impacts R&D efforts. A well-designed liability rule implements efficient levels of autonomy and balanced R&D that reduces AI's general risk. However, under targeted R&D to reduce AI's extreme risk, full efficiency cannot be achieved with liability, and regulations limiting such risk can perform better.
  • ",鈥 (with C. Cao, Y. Ding, and T. Zhang) RAND Journal of Economics, forthcoming.
    Abstract: We analyze a model where consumers sequentially search experts for treatment recommendations and prices, facing either zero or a positive search cost, whereas experts simultaneously compete in these two dimensions. In equilibrium, experts may "cheat" by overstating the severity of a consumer's problem and recommending an unnecessary treatment, prices follow distributions depending on the problem type and the treatment, and consumers employ Bayesian belief updating about their problem types during search. Paradoxically, as search cost decreases, expert cheating and prices can both increase stochastically. However, if search cost is sufficiently small, competition will force all experts to behave honestly.

Murat Iyigun

"," (with Ali Almelhem, Austin Kennedy and Jared Rubin) Quarterly Journal of Economics, Volume 141, Issue 1, pages 263鈥314, February 2026.
Abstract: Using textual analysis of 173,031 works printed in England between 1500 and 1900, we test whether British culture evolved to manifest a heightened belief in progress associated with science and industry. Our analysis yields three main findings. First, there was a separation in the language of science and religion beginning in the 17th century. Second, scientific volumes became more progress-oriented during the Enlightenment. Third, industrial works, especially those at the science-political economy nexus, were more progress-oriented beginning in the 17th century. It was therefore the more pragmatic, industrial works which reflected the cultural values cited as important for Britain's takeoff.


Taylor Jaworski

"," (with听Alex Hollingsworth, Carl Kitchens, and Ivan Rudik)听Journal of Political Economy: Microeconomics, Vol. 4, No. 2. May 2026.
Abstract: We develop a quantitative economic geography model with endogenous emissions, amenities, trade, and labor reallocation to evaluate the spatial impacts of the leading air quality regulation in the United States: the National Ambient Air Quality Standards (NAAQS). We find that the NAAQS generate over $50 billion in annual welfare gains. The gains are spatially concentrated in a small set of cities targeted by the NAAQS, and the improved amenities attract large numbers of nonmanufacturing workers into these areas. Despite the concentration of the benefits in cities, over $10 billion in benefits spill over into counties indirectly affected by the regulation. We use our model to analyze counterfactual policies and find that making the NAAQS more stringent could have increased welfare by another $13 billion annually, and that using emissions pricing could increase welfare by another $70 billion per year.


Xiaodong Liu

"," (with Chih-Sheng Hsieh, Michael K缨nig, and Christian Zimmermann) conditionally accepted, AEJ: Microeconomics, 2025.

Abstract: This paper studies the impact of collaboration on research output. First, we build a micro-founded model for scientific knowledge production, where collaboration between researchers is represented by a bipartite network. The Nash equilibrium of the game incorporates both the complementarity effect between collaborating researchers and the substitutability effect between concurrent projects of the same researcher. Next, we propose a Bayesian MCMC procedure to estimate the structural parameters, taking into account the endogenous participation of researchers in projects. Finally, we illustrate the empirical relevance of the model by analyzing the coauthorship network of economists registered in the RePEc Author Service. The estimated complementarity and substitutability effects are both positive and significant when the endogenous matching between researchers and projects is controlled for, and are downward biased otherwise. To show the importance of correctly estimating the structural model in policy evaluation, we conduct a counterfactual analysis of research incentives. We find that the effectiveness of research incentives tends to be understated when the complementarity effect is ignored and overstated when the substitutability effect is ignored.

"," (with Chih-Sheng Hsieh and Michael K缨nig) The RAND Journal of Economics, 2025.
Abstract: We introduce an R&D network formation model where firms choose both R&D efforts and collaboration partners. Neighbors in the network benefit from each other's R&D through technology spillovers, and there exist competition effects reflecting strategic substitutability in R&D. We provide a complete equilibrium characterization, and develop an estimation method that is computationally feasible for large networks. We then conduct an analysis of R&D collaboration subsidies, and find that a subsidy scheme targeting specific R&D collaborations can be more effective than a uniform subsidy, with a welfare gain up to five times larger than the cost of the subsidy.

"," (with Hyunseok Jung) Journal of Econometrics听253, 106124, 2026.
Abstract: This paper proposes an Anderson-Rubin (AR) test for the presence of peer effects in panel data without the need to specify the network structure. The unrestricted model of our test is a linear panel data model of social interactions with dyad-specific peer effect coefficients for all potential peers. The proposed AR test evaluates if these peer effect coefficients are all zero. As the number of peer effect coefficients increases with the sample size, so does the number of instrumental variables (IVs) employed to test the restrictions under the null, rendering a many-IV environment of Bekker (1994). By extending existing many-IV asymptotic results to panel data, we establish the asymptotic validity of the proposed AR test. Our Monte Carlo simulations show the robustness and improved performance of the proposed test compared to some existing tests with misspecified networks. We provide two applications to demonstrate its empirical relevance.
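
For intuition, a stylized cross-sectional Anderson-Rubin-type check of the null that all peer effect coefficients are zero can be sketched as follows. This is an illustrative Python toy, not the paper's panel test with many-IV (Bekker-type) asymptotics, and all variable names are hypothetical:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, q = 200, 5                    # n units, q candidate peer regressors
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # exogenous controls
    Z = rng.normal(size=(n, q))      # IVs for the potential peers
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)      # null is true here

    # Partial the controls out of y and Z, then F-test whether Z
    # explains what is left of y (q zero restrictions).
    M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    yt, Zt = M @ y, M @ Z
    b = np.linalg.lstsq(Zt, yt, rcond=None)[0]
    e = yt - Zt @ b
    k = X.shape[1]
    F = ((yt @ yt - e @ e) / q) / (e @ e / (n - k - q))
    print(f"F = {F:.2f}, p = {1 - stats.f.cdf(F, q, n - k - q):.3f}")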

"," (with Ingmar R. Prucha), Journal of Econometrics, 247, 105925, 2025.
Abstract: The paper introduces robust generalized Moran tests for network-generated cross-sectional dependence in a panel data setting where unit-specific effects can be random or fixed. Network dependence may originate from endogenous variables, exogenous variables, and/or disturbances, and the network dependence is allowed to vary over time. The formulation of the test statistics also aims at accommodating situations where the researcher is unsure about the exact nature of the network. Unit-specific effects are eliminated using the Helmert transformation, which is well known to yield time-orthogonality for linear forms of transformed disturbances. Given the specification of our test statistics, these orthogonality properties also extend to the quadratic forms that underlie our test statistics. This greatly simplifies the expressions for the asymptotic variances of our test statistics and their estimation. Monte Carlo simulations suggest that the generalized Moran tests introduced in this paper have the proper size and can provide substantial improvement in robustness when the researcher faces uncertainty about the specification of the network topology.
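
The classical building block behind such tests is a Moran-I-type quadratic form in regression residuals for a candidate network W. A minimal cross-sectional sketch in Python (this omits the Helmert transformation, the time-varying networks, and the robustification the paper develops; the toy network is hypothetical):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    A = np.triu(rng.random((n, n)) < 0.05, 1)       # toy random adjacency
    W = (A | A.T).astype(float)
    W /= W.sum(axis=1, keepdims=True).clip(min=1)   # row-normalize

    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]   # OLS residuals

    I = (e @ W @ e) / (e @ e)                # Moran-type quadratic form
    print(f"Moran-type statistic: {I:.3f}")  # near 0 absent network dependence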


Richard Mansfield

","听The Economic Journal, forthcoming.
Abstract: This paper examines how spatial frictions that differ among heterogeneous workers and establishments shape the geographic and demographic incidence of alternative local labour demand shocks, with implications for the appropriate level of government at which to fund local economic initiatives. Longitudinal Employer-Household Dynamics data featuring millions of job transitions facilitate estimation of a rich two-sided labour market assignment model. The model generates simulated forecasts of many alternative local demand shocks featuring different establishment compositions and local areas. Workers within 10 miles receive only 11.2% (6.6%) of nationwide welfare (employment) short-run gains, with at least 35.9% (62.0%) accruing to out-of-state workers, despite much larger per-worker impacts for the closest workers. Local incidence by demographic category is very sensitive to shock composition, but different shocks produce similar demographic incidence further from the shock. Furthermore, the remaining heterogeneity in incidence at the state or national level can reverse patterns of heterogeneous demographic impacts at the local level. Overall, the results suggest that reduced-form approaches using distant locations as controls can produce accurate estimates of local shock impacts on local workers, but that the distribution of local impacts badly approximates shocks' statewide or national incidence.


Adam McCloskey

","听(with Pascal Michaillat)听Review of Economics and Statistics, forthcoming.
Abstract: P-hacking is prevalent in reality but absent from classical hypothesis testing theory. As a consequence, significant results are much more common than they are supposed to be when the null hypothesis is in fact true. In this paper, we build a model of hypothesis testing with p-hacking. From the model, we construct critical values such that, if the values are used to determine significance, and if scientists' p-hacking behavior adjusts to the new significance standards, significant results occur with the desired frequency. Such robust critical values allow for p-hacking, so they are larger than classical critical values. To illustrate the amount of correction that p-hacking might require, we calibrate the model using evidence from the medical sciences. In the calibrated model, the robust critical value for any test statistic is the classical critical value for the same test statistic with one fifth of the significance level.
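
The calibrated rule in the final sentence is easy to illustrate with standard normal quantiles; for a two-sided z-test at the 5% level (a minimal sketch in Python):

    from scipy.stats import norm

    alpha = 0.05
    classical = norm.ppf(1 - alpha / 2)     # ~1.96
    robust = norm.ppf(1 - alpha / 5 / 2)    # same test at alpha/5 = 1%: ~2.58
    print(f"classical: {classical:.2f}, robust: {robust:.2f}")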

","听(with Philipp Ketz)听Review of Economics and Statistics,听forthcoming.
Abstract: We introduce adaptive confidence intervals on a parameter of interest in the presence of nuisance parameters, such as coefficients on control variables, with known signs. Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard ones when the nuisance parameters are small. At the same time, they entail minimal length increases at any parameter values. We apply our confidence intervals to the linear regression model, prove their uniform validity and illustrate their length properties in an empirical application to a factorial design field experiment and a Monte Carlo study calibrated to the empirical application.


Sergey Nigai

","听American Economic Journal: Economic Policy, forthcoming.
Abstract: I examine the international transmission of income inequality through trade. Using firm-level and aggregate data, I find that exporting to more unequal countries increases domestic inequality. I rationalize this finding by developing a model of international consumer targeting in which firms serve specific consumer segments in each market. Inequality in export markets shapes the distribution of firms' profits and, therefore, the incomes of individuals linked to them, widening domestic inequality. The calibrated model suggests that the international inequality transmission explains 4.4% and 4.8% of the observed levels of the Gini coefficients and income shares of the top 1%, respectively.


Alessandro Peri

"," (with Fabrizio Colella and Keith E. Maskus), The Economic Journal, conditionally accepted, 2025.
Abstract: Tighter money-laundering regulations in offshore financial havens may inadvertently spur incentives to launder money domestically. Our study exploits regulations targeting financially based money laundering in Caribbean jurisdictions to uncover the creation of front companies in the United States. We find that counties exposed via offshore financial links to these jurisdictions experienced an increase in business activities after the tightening of anti-money-laundering regulations. The effect is more pronounced among small firms, in sectors at high risk of money laundering, and in regions with high intensities of drug trafficking. Our work provides the first empirical evidence of the real effects of policy-induced money-laundering leakage.

,"听(with Cheela, Bhagath, et al.) Quantitative Economics, vol. 16, .no 1, Econometric Society, 2025, pp. 49-87.
Abstract: We show how to use field-programmable gate arrays (FPGAs) and their associated high-level synthesis (HLS) compilers to solve heterogeneous agent models with incomplete markets and aggregate uncertainty (Krusell and Smith (1998)). We document that the acceleration delivered by one single FPGA is comparable to that provided by using 69 CPU cores in a conventional cluster. The time to solve 1200 versions of the model drops from 8 hours to 7 minutes, illustrating a great potential for structural estimation. We describe how to achieve multiple acceleration opportunities (pipeline, data-level parallelism, and data precision) with minimal modification of the C/C++ code written for a traditional sequential processor, which we then deploy on FPGAs easily available at Amazon Web Services. We quantify the speedup and cost of these accelerations. Our paper is the first step toward a new field, electrical engineering economics, focused on designing computational accelerators for economics to tackle challenging quantitative models. Replication code is available on GitHub.
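
The reported timings imply the speedup directly; a one-line Python check of the abstract's arithmetic:

    # 1200 model solutions: 8 hours on CPU versus 7 minutes on the FPGA,
    # consistent with the comparison to 69 CPU cores.
    cpu_minutes, fpga_minutes = 8 * 60, 7
    print(f"speedup: {cpu_minutes / fpga_minutes:.0f}x")  # ~69x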

","听(with Omar Rachedi and Iacopo Varotto)听The Review of Economics and Statistics, forthcoming.
Abstract: Aggregate and sectoral effects of public investment crucially depend on the interaction between the output elasticity to public capital and intermediate inputs. We uncover this fact through the lens of a New Keynesian production network. This setting doubles the socially optimal amount of public capital relative to the one-sector model without intermediate inputs, leading to a substantial amplification of the public-investment multiplier. We also document novel sectoral implications of public investment. Although public investment is concentrated in far fewer sectors than public consumption, its effects are relatively more evenly distributed across industries. We validate this model implication in the data.