Experiments and Causal Inference

A graduate seminar on experimental and quasi-experimental designs for causal inference.
ISS5096 Spring 2026 TSMC Bldg. R406 Thursday, 14:20–17:20

Course Overview

This course introduces experimental and quasi-experimental methods for causal inference that are widely used across domains such as marketing and information systems. The focus is on covering a broad range of substantive topics and the methodological considerations that arise when applying identification-oriented methods.

Throughout the course, we will discuss topics related to methods such as randomized controlled trials (RCT), difference-in-differences (DiD), matching methods such as propensity score matching (PSM) and coarsened exact matching (CEM), and more advanced topics such as regression discontinuity designs (RDD), double-debiased machine learning (DML), synthetic control methods (SCM), and synthetic difference-in-differences.

Students will review relevant research papers on each topic and actively engage in presentations and discussions about the nature of causation and alternative means of inferring causal relationships. Students will also carry out a collaborative group project where they design an experiment and associated plan of analysis to draw business insights.

Learning Objectives

  • Determine which methods and results best support specific empirical inference questions.
  • Gain familiarity with causal inference methods widely used for business analytics.
  • Understand the trade-offs in the design, analysis, and reporting of field, quasi, and natural experiment methods.

Prerequisites

  • Math: Undergraduate-level probability and statistics; some experience with regression analysis/econometrics is helpful. Basic linear algebra is helpful for following textbook derivations.
  • Programming: Working knowledge of statistical programming (e.g., R or Python).
  • Recommended prior coursework (ISS):
    • ISS5066 Programming for Business Analytics (PBA) covers statistical inference, R programming, and an introduction to causal inference, including potential outcomes and difference-in-differences, all of which are directly relevant to ECI. ECI goes considerably deeper, with more formal notation and careful, assumption-based reasoning. Students entering from PBA will find the foundational concepts familiar but should expect to engage more rigorously with the underlying theory. Qualified undergraduates and graduate students from other departments are also welcome to join the class.
    • ISS5077 Computational Statistics for Data Science (CSDS) covers hypothesis testing, regression, and applied statistical modeling and will provide a strong background for the statistical foundations covered in this course.
    • ISS5090 Business Analytics with Machine Learning (BAML) covers supervised and unsupervised learning methods including trees, random forests, and neural networks and will provide helpful background for the causal ML topics covered in Weeks 14–15.
  • Related coursework in other departments:
    • ECON3033 Econometrics I and ECON3034 Econometrics II: regression methodology and empirical research designs.
    • ECON5099 Causal Inference in Econometrics covers related topics.

Teaching Team

Instructor

Jaewon Yoo · Assistant Professor

Institute of Service Science, College of Technology Management
National Tsing Hua University

Teaching Assistants

Kai-Chieh (Justin) Kao

Alumnus, ISS · TSMC

justin.kao [at] iss.nthu.edu.tw

Ting-Wen (Keri) Liu

M.S. Student, ISS, CADI Lab

keri.liu [at] iss.nthu.edu.tw

Course Platforms

This course uses three platforms: this website, MS Teams, and Canvas.

Purpose Platform
Syllabus, schedule, readings, slides This website
Announcements & Q&A Teams → General channel
Shared papers & resources Teams → Files
Office hours & consultations Teams → DM + Google Calendar
Homework submission Canvas
Grades Canvas

Course Details

  • Feb 23 – Jun 15, 2026
  • Thursday
  • 14:20–17:20
  • TSMC Bldg. R406

Communication

All course communication goes through MS Teams. Homework submissions and grades are on Canvas. For private matters, email or DM the instructor.

What's New

I update the ECI GitHub Page regularly throughout the semester. Items below have been recently revised and need your attention.

  • Paper presentations now follow a hybrid format: each student presents one paper live in class and one as a recorded video, with a short in-class round-table afterwards (see Evaluation → Paper Introduction Presentations).
  • AI Use Policy: useful guidance on using AI in academic writing. Practical insights added: risks to watch (cognitive offloading, voice homogenization) and practices that help (editor-not-rewriter mode, self-check question), with references to leading guides. Worth reading carefully before using AI tools on coursework (see Policies → AI Use Policy).
  • Lecture materials are being uploaded as the course progresses; check the Materials tab regularly for the latest slides, handouts, and written notes.

Topics Covered

RCT

Design and analysis of randomized field experiments.

Difference-in-Differences

Causal inference with longitudinal data and staggered adoption.

Matching Methods

Propensity score matching (PSM) and coarsened exact matching (CEM).

RDD

Sharp and fuzzy regression discontinuity designs.

Causal ML

Double-debiased ML (DML) and Heterogeneous Treatment Effects (HTE) with Meta-learners.

DAGs & Mechanisms

Graphical models for identification and exploring causal mechanisms.

Video Lectures

Lecture 7: Noncompliance and Instrumental Variables

Weekly Schedule

Please refer to the Syllabus PDF for the most up-to-date schedule and full reading lists. Click the purple Slides badges to view lecture slides. Readings should be completed before class. Items marked Due must be submitted via Canvas before that week’s class begins (Thu 14:20).

Part I: Foundations of Causal Inference

Week 1 (Feb 26): Introduction and Potential Outcomes Slides

  • Neyman-Rubin causal model
  • Fundamental problem of causal inference (FPOCI)
  • Causal estimands; Post-treatment bias under truncation by death
  • Readings: Imbens & Rubin Ch. 1; Angrist & Pischke Ch. 1; Holland (1986)
Group formation & ice-breaking activities

Week 2 (Mar 5): Randomization Inference Slides

  • Randomized experiments
  • Fisher’s approach to inference, permutation tests
  • Sharp null, randomization distribution
  • Readings: Imbens & Rubin Ch. 5 (skim Ch. 4); Rosenbaum (2002) Ch. 2
Find a collaborator for the group project
Due: The Effect CH 2: Research Questions
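To make Fisher's approach concrete, here is a minimal Python sketch (in-class code will be in R; all numbers are hypothetical) of an exact permutation test of the sharp null in a small completely randomized experiment:

```python
from itertools import combinations

# Toy outcomes for 8 units; the indices in `treated` received treatment
# (hypothetical data for illustration only).
y = [12.0, 9.0, 11.0, 14.0, 7.0, 8.0, 10.0, 6.0]
treated = {0, 2, 3, 6}

def diff_in_means(assignment):
    t = [y[i] for i in assignment]
    c = [y[i] for i in range(len(y)) if i not in assignment]
    return sum(t) / len(t) - sum(c) / len(c)

observed = diff_in_means(treated)  # 11.75 - 7.5 = 4.25

# Fisher's sharp null: no effect for any unit, so outcomes are fixed and
# every way of choosing 4 treated units out of 8 was equally likely.
# Enumerating all C(8,4) = 70 assignments gives the exact randomization
# distribution; the two-sided p-value counts assignments at least as extreme.
dist = [diff_in_means(set(s)) for s in combinations(range(8), 4)]
p_value = sum(abs(d) >= abs(observed) for d in dist) / len(dist)
```

Because all 70 assignments are enumerated, the p-value is exact; for larger experiments one instead samples assignments at random.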

Week 3 (Mar 12): Inference for the Average Treatment Effect Slides

  • Neyman’s approach to inference for the ATE
  • Finite-sample vs superpopulation inference
  • Readings: Imbens & Rubin Ch. 6, 9, 10; Angrist & Pischke Ch. 2
Due: Problem Set 1 (Potential Outcomes)
Due: The Effect CH 3: Describing Variables
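As a companion sketch (Python for illustration; toy numbers are hypothetical), Neyman's difference-in-means estimator and its conservative variance estimate can be computed directly:

```python
from statistics import mean, variance

# Hypothetical outcomes from a completely randomized experiment.
y_treat = [5.1, 6.3, 5.8, 7.0, 6.1]
y_ctrl = [4.2, 5.0, 4.8, 5.5, 4.4]

# Difference-in-means estimator of the ATE.
ate_hat = mean(y_treat) - mean(y_ctrl)

# Neyman's variance estimator s_t^2/n_t + s_c^2/n_c is conservative in
# finite samples (it ignores the unidentifiable correlation between
# potential outcomes) and exact under a constant treatment effect.
se = (variance(y_treat) / len(y_treat) + variance(y_ctrl) / len(y_ctrl)) ** 0.5
ci_95 = (ate_hat - 1.96 * se, ate_hat + 1.96 * se)
```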

Week 4 (Mar 19): Linear Regression and Randomized Experiments Slides

  • Simple linear regression in experiments
  • Covariate adjustment in experiments with regression
  • Readings: Imbens & Rubin Ch. 7, 9, 10; Lin (2013); Freedman (2008)
Paper Presentations: 1.1 and 1.2
Due: Problem Set 2 (Randomization Inference)
Due: The Effect CH 4: Describing Relationships

Part II: Observational Studies

Week 5 (Mar 26): Individual/Group Project Meetings

  • Schedule a meeting during office hours to discuss and materialize your project ideas.
Due: Problem Set 3 (Inference for the ATE)
Due: Project Proposal (half-page description)
Due: The Effect CH 13: Regression

Week 6 (Apr 2): School Holiday (no class)

Week 7 (Apr 9): Observational Studies I Slides

  • Selection on observables
  • Regression for observational data
  • Readings: Angrist & Pischke Ch. 3
Paper Presentations: 1.3 and 1.4

Week 8 (Apr 16): DAGs and Covariate Selection Slides

  • DAGs and the back-door criterion
  • Partial identification
  • Readings: Imbens & Rubin Ch. 21–22; Morgan & Winship Ch. 4 (handout provided)
Paper Presentations: 2.3 and 2.4
Construct a DAG for your own research project

Part III: Quasi-Experimental Methods

Week 9 (Apr 23): Instrumental Variables I (Noncompliance and IV) Slides

  • Noncompliance and IV in observational studies
  • Readings: Imbens & Rubin Ch. 23–24; Angrist & Pischke Ch. 4
Due: The Effect CH 6 & 7 Exercises

Week 10 (Apr 30): Instrumental Variables II (TSLS) Slides

  • Two-stage least squares
  • Review of IV applications
Paper Presentation: 2.5
Due: The Effect CH 8: Causal Paths and Closing Back Doors
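In the just-identified case, 2SLS reduces to the Wald ratio cov(z, y)/cov(z, x). The following Python simulation sketch (hypothetical data-generating process; in-class demos are in R) shows OLS biased by an unobserved confounder while the IV estimate recovers the true effect:

```python
import random

random.seed(0)
n = 5000
beta = 2.0  # true causal effect (known only because we simulate)

z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)               # instrument: shifts x, excluded from y
    u = random.gauss(0, 1)                # unobserved confounder
    xi = 0.8 * zi + u + random.gauss(0, 1)
    yi = beta * xi + u + random.gauss(0, 1)  # u makes x endogenous
    z.append(zi)
    x.append(xi)
    y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)  # biased upward: u drives both x and y
beta_iv = cov(z, y) / cov(z, x)   # Wald ratio = 2SLS when just-identified
```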

Week 11 (May 7): Panel Data, Fixed Effects, and Difference-in-Differences Slides

  • Fixed effects and first differences
  • Difference-in-differences
  • Readings: Angrist & Pischke Ch. 5; Imai & Kim (2019)
Paper Presentations: 3.1 and 3.2
Due: Problem Set 4 (Observational Studies)
Due: Progress Report
Due: The Effect CH 19: Instrumental Variables
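The canonical 2x2 difference-in-differences estimate is just four means; a Python sketch with hypothetical numbers:

```python
# Hypothetical group-period mean outcomes for a 2x2 design; treatment
# switches on for the treated group in period 1.
means = {
    ("treated", 0): 10.0, ("treated", 1): 15.0,
    ("control", 0): 8.0, ("control", 1): 11.0,
}

trend_treated = means[("treated", 1)] - means[("treated", 0)]  # 5.0
trend_control = means[("control", 1)] - means[("control", 0)]  # 3.0

# Parallel trends: absent treatment, the treated group would have followed
# the control trend, so the excess trend is the effect on the treated.
did_estimate = trend_treated - trend_control  # 5.0 - 3.0 = 2.0
```

With staggered adoption and heterogeneous effects this simple logic breaks down, which is exactly what the modern DiD literature we discuss addresses.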

Week 12 (May 14): Matching and Weighting Estimators Slides Slides (b)

  • Propensity scores, matching, and weighting
  • Readings: Imbens & Rubin Ch. 13, 15, 18; Stuart (2010)
Paper Presentations: 4.1, 4.2, 4.3, and 4.4
Due: Problem Set 5 (Instrumental Variables)
Due: The Effect CH 18: Difference-in-Differences
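A minimal Python sketch of inverse propensity weighting (hypothetical toy data; propensities estimated here by stratifying on a single binary covariate rather than a fitted model):

```python
# Hypothetical observational data: tuples (x, t, y). x confounds selection:
# units with x = 1 are more often treated and have higher outcomes.
data = [
    (1, 1, 10.0), (1, 1, 11.0), (1, 1, 9.0), (1, 0, 8.0),
    (0, 1, 6.0), (0, 0, 4.0), (0, 0, 5.0), (0, 0, 4.5),
]

def propensity(x_val):
    # Estimate e(x) = P(T = 1 | X = x) as the treated share in the stratum.
    cell = [t for x, t, _ in data if x == x_val]
    return sum(cell) / len(cell)

# Naive difference in means ignores the confounder x.
treated_y = [y for _, t, y in data if t == 1]
control_y = [y for _, t, y in data if t == 0]
naive = sum(treated_y) / len(treated_y) - sum(control_y) / len(control_y)

# IPW reweights each unit by 1 / P(its observed treatment), recovering the
# ATE under selection on observables and overlap.
n = len(data)
ate_ipw = sum(
    t * y / propensity(x) - (1 - t) * y / (1 - propensity(x))
    for x, t, y in data
) / n
```

Here the naive comparison overstates the effect because high-x units are both more likely treated and higher-outcome; weighting removes that imbalance.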

Week 13 (May 21): Regression Discontinuity Designs Slides

  • Sharp RD designs, identification
  • Estimation and bandwidth selection
  • Readings: Angrist & Pischke Ch. 6; Imbens & Lemieux (2008)
Paper Presentations: 5.1 and 5.2
Due: The Effect CH 14: Matching
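A sharp RD estimate is the jump between two local linear fits at the cutoff. A Python simulation sketch (hypothetical data-generating process and a hand-picked bandwidth; in R, rdrobust automates bandwidth selection):

```python
import random

random.seed(1)
cutoff, tau = 0.0, 3.0  # tau: true jump at the cutoff (simulation ground truth)

# Sharp RD: treatment is assigned exactly when the running variable crosses
# the cutoff, and the outcome jumps by tau there.
xs = [random.uniform(-1, 1) for _ in range(4000)]
ys = [2.0 * x + tau * (x >= cutoff) + random.gauss(0, 0.5) for x in xs]

def fit_line(pts):
    # Simple OLS of y on x; returns (intercept, slope).
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    b = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return my - b * mx, b

h = 0.25  # bandwidth: keep only observations close to the cutoff
left = [(x, y) for x, y in zip(xs, ys) if -h <= x < cutoff]
right = [(x, y) for x, y in zip(xs, ys) if cutoff <= x <= h]

a_left, _ = fit_line(left)
a_right, _ = fit_line(right)
rd_estimate = a_right - a_left  # predicted jump at x = cutoff
```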

Week 14 (May 28): Regression Discontinuity Designs Cont. Slides

Paper Presentations: 5.3 and 5.4
Due: Problem Set 6 (Matching and Weighting Estimators)

Part IV: Advanced Topics & Research Workshop

Week 15 (Jun 4): Advanced Topics TENTATIVE

Due: The Effect CH 20: Regression Discontinuity
Due: Final Report

Week 16 (Jun 11): Final Group Project Presentations

  • Each group presents their research project, followed by structured peer feedback and class discussion.

Lecture Slides & Handouts


Written Notes

Detailed notes in article format, covering material discussed in class and beyond.

Week 5: Observational Studies
Week 7: Noncompliance and Instrumental Variables

Textbooks

Required (please have these ready before the semester begins):

  1. Angrist, J. D. & Pischke, J.-S. (2009). Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton University Press.
  2. Imbens, G. W. & Rubin, D. B. (2015). Causal Inference for Statistics, Social, and Biomedical Sciences. Cambridge University Press.
  3. Huntington-Klein, N. (2021). The Effect: An Introduction to Research Design and Causality. CRC Press. (Open-source PDF version provided by the author.)

Optional (useful for additional coverage):

Software

We use R (with RStudio) as the primary computing environment. Key packages include:

tidyverse, fixest, did, rdrobust, MatchIt, WeightIt, grf, dagitty, ggdag, modelsummary

Students comfortable with Stata or Python may use those for their research projects, but in-class demonstrations and code examples will be in R.

Grading

Component Weight Description
Research Project 50% Original empirical research paper (max 20 pages). Includes proposal, progress report, presentation, and final paper.
Homework 30%  
    Problem Sets 15% Conceptual questions, analytic problems, simulations, and data analysis.
    The Effect Assignments 15% Exercises from The Effect by Huntington-Klein.
Participation (incl. presentations) 10% Active engagement in discussions + paper introduction presentations.
One-Page Summaries 5% Weekly reading summaries, graded complete/incomplete.
Attendance 5% Expected at every session; each absence costs ~1% of final grade.

Research Project

In lieu of midterms and a final exam, students write a short paper applying or extending the causal inference methods learned in class. The paper should be no longer than 20 double-spaced pages and focus on research design, data, methodology, results, and analysis.

Sample Evaluation

To give you a clear sense of expectations and grading criteria, here is a sample final report with instructor evaluation from a previous offering:

Project Milestones

Milestone Due Deliverable
Find a collaborator Week 2 Form a team or obtain permission for individual project
Project proposal Week 5 Half-page description of proposed project & feasible research plan
Progress report Week 11 5-page memo with preliminary results, tables, figures, and analysis
Final project report Week 15 Submit final version of the paper
Final presentation Week 16 In-class group presentation

Structured Peer Feedback

Final presentations are accompanied by structured peer feedback. After each presentation, every student completes a feedback form covering four areas: Research Question, Identification Strategy, Threats & Limitations, and a Constructive Suggestion. Each area includes space for written comments and a 1–5 rating.

Open Printable Feedback Form

Feedback forms are shared with presenters to support their final revisions. The quality and thoughtfulness of your feedback (not the scores you give) contributes to your Participation grade. Presenter grades are determined by the instructor independently.


Homework

You will have two types of homework:

  1. Problem sets: a mix of conceptual questions, analytic problems, computer simulations, and data analysis that closely resemble what we discuss in lectures.
  2. The Effect assignments: exercises from our third textbook.

You are encouraged to work in groups, but you must always write your own solutions including your own computer code. It is hugely beneficial to attempt the problems on your own before working in groups.

Late policy: Late submissions are penalized 1 percentage point of the assignment’s weight per day. For example, an assignment worth 7% of the course grade turned in 3 days late has a maximum attainable score of 4%.


One-Page Summaries, Presentations & Reading Assignments

  • One-Page Summaries: Before each class, every student submits a 1-page summary of the assigned papers. Graded on a complete/incomplete basis: submission is what counts.
  • Paper Introduction Presentations: Students take turns presenting assigned papers throughout the semester. Each student presents two papers: one live in class and one as a recorded video. Presentations should cover the value of the topic and the motivational story, then connect to the methodology we cover in class: where the paper applies the methods, where it stops short or pushes further, and what was particularly interesting about the paper.
    • Recording upload: to MS Teams by Monday of the week the paper is scheduled.
    • Pre-class viewing & evaluation: all students watch the week’s recording(s) and submit the peer evaluation form before Thursday class.
    • In-class roundtable: If time permits after the lecture, peers will discuss the recorded paper(s) in a roundtable format, with the presenter available to clarify points as needed.
    • Grading: both presentation quality (for presenters) and evaluation quality (for evaluators) contribute to the Participation grade.

One-Page Summary Guideline

Choose 5 papers from the assigned reading list and write a one-page summary addressing the following:

  1. Main Question: What is the main question of the paper? Why do we care?
  2. Parameters of Interest: What model aspects or facts are being estimated? What parameter answers the main question?
  3. Data: What is the unit of observation? Where are the data from? How were they collected? Do the data appear reliable?
  4. Answers: What does the paper find? What are the main weaknesses? How could it be improved? If you were a practitioner, what implications would you draw?
  5. Extensions: After the current paper, what could be done next in this general area?

You may also discuss: one-sentence conclusion, institutional background, conceptual framework, or relevant literature. Submit as PDF via Teams.

Reading Assignments by Topic

Students will present papers from the following five topic groups. Each group contains five readings: a textbook chapter plus published applications from top journals. Presentation assignments will be finalized before the first presentation week.

Note: If you have a top-journal paper that uses a method covered in one of these modules (e.g., a DiD paper for the DiD module) and you would prefer to present it instead, please contact the instructor before presentations begin to request a substitution.

1. Field Experiments (Weeks 4–5)

  1. Angrist, J. D. & Pischke, J.-S. (2009). Ch. 2, “The Experimental Ideal.” Mostly Harmless Econometrics. Princeton University Press.
  2. Montaguti, E., Neslin, S. A., & Valentini, S. (2016). “Can Marketing Campaigns Induce Multichannel Buying and More Profitable Customers? A Field Experiment.” Marketing Science, 35(2), 201–217.
  3. Sahni, N. S., Zou, D., & Chintagunta, P. K. (2017). “Do Targeted Discount Offers Serve as Advertising? Evidence from 70 Field Experiments.” Management Science, 63(8), 2688–2705.
  4. Bapna, R., Ramaprasad, J., Shmueli, G., & Umyarov, A. (2016). “One-Way Mirrors in Online Dating: A Randomized Field Experiment.” Management Science, 62(11), 3100–3122.
  5. Cook, T. D., Campbell, D. T., & Shadish, W. R. (2002). Ch. 1, “Experiments and Generalized Causal Inference.” Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin.

2. Instrumental Variables (Weeks 8–9)

  1. Angrist, J. D. & Pischke, J.-S. (2009). Ch. 4, “Instrumental Variables in Action.” Mostly Harmless Econometrics. Princeton University Press.
  2. Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). “Identification of Causal Effects Using Instrumental Variables.” Journal of the American Statistical Association, 91(434), 444–455.
  3. Zettelmeyer, F., Scott Morton, F., & Silva-Risso, J. (2006). “How the Internet Lowers Prices: Evidence from Matched Survey and Automobile Transaction Data.” Journal of Marketing Research, 43(2), 168–181.
  4. Dewan, S. & Ramaprasad, J. (2012). “Music Blogging, Online Sampling, and the Long Tail.” Information Systems Research, 23(3), 1056–1067.
  5. Barron, K., Kung, E., & Proserpio, D. (2021). “The Effect of Home-Sharing on House Prices and Rents: Evidence from Airbnb.” Marketing Science, 40(1), 23–47.

3. Difference-in-Differences (Week 11)

  1. Angrist, J. D. & Pischke, J.-S. (2009). Ch. 5, “Parallel Worlds: Fixed Effects, Difference-in-Differences, and Panel Data.” Mostly Harmless Econometrics. Princeton University Press.
  2. Bertrand, M., Duflo, E., & Mullainathan, S. (2004). “How Much Should We Trust Differences-in-Differences Estimates?” Quarterly Journal of Economics, 119(1), 249–275.
  3. Dranove, D., Kessler, D., McClellan, M., & Satterthwaite, M. (2003). “Is More Information Better? The Effects of ‘Report Cards’ on Health Care Providers.” Journal of Political Economy, 111(3), 555–588.
  4. Goldfarb, A. & Tucker, C. E. (2014). “Conducting Research with Quasi-Experiments: A Guide for Marketers.” Rotman School of Management Working Paper No. 2420920.
  5. Foerderer, J., Lueker, N., & Heinzl, A. (2021). “And the Winner Is…? The Desirable and Undesirable Effects of Platform Awards.” Information Systems Research, 32(4), 1155–1172.

4. Matching Methods (Week 12)

  1. Gordon, Brett R., et al. (2019). “A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook.” Marketing Science, 38(2), 193–225.
  2. Xu, K., Chan, J., Ghose, A., & Han, S. P. (2017). “Battle of the Channels: The Impact of Tablets on Digital Commerce.” Management Science, 63(5), 1469–1492.
  3. Adamopoulos, P., Todri, V., & Ghose, A. (2020). “Demand Effects of the Internet-of-Things Sales Channel: Evidence from Automating the Purchase Process.” Information Systems Research, 32(1), 238–267.
  4. Kim, Jun Hyung, et al. (2021). “Home-Tutoring Services Assisted with Technology: Investigating the Role of Artificial Intelligence Using a Randomized Field Experiment.” Journal of Marketing Research.
  5. Son, Y., Oh, W., Han, S. P., & Park, S. (2020). “When Loyalty Goes Mobile: Effects of Mobile Loyalty Apps on Purchase, Redemption, and Competition.” Information Systems Research, 31(3), 835–847.

5. Regression Discontinuity Design (Weeks 13–14)

  1. Angrist, J. D. & Pischke, J.-S. (2009). Ch. 6, “Getting a Little Jumpy: Regression Discontinuity Designs.” Mostly Harmless Econometrics. Princeton University Press.
  2. Flammer, C. (2015). “Does Corporate Social Responsibility Lead to Superior Financial Performance? A Regression Discontinuity Approach.” Management Science, 61(11), 2549–2568.
  3. Hartmann, W., Nair, H. S., & Narayanan, S. (2011). “Identifying Causal Marketing Mix Effects Using a Regression Discontinuity Design.” Marketing Science, 30(6), 1079–1097.
  4. Jo, Wooyong, et al. (2020). “Protecting Consumers from Themselves: Assessing Consequences of Usage Restriction Laws on Online Game Usage and Spending.” Marketing Science, 39(1), 117–133.
  5. Flammer, C. & Bansal, P. (2017). “Does a Long-Term Orientation Create Value? Evidence from a Regression Discontinuity.” Strategic Management Journal, 38(9), 1827–1847.

Course Policies

Attendance: All students are expected to attend every class. Please bring your own hard copy of the course materials distributed before class. If you must miss a class, inform the instructor or TA in advance via email or phone. You are still responsible for the materials covered. Attendance counts toward your participation score (5%); each absence costs approximately 1% of the final grade.

Participation (10%): This includes both active engagement in class discussions and the quality of your paper introduction presentations. Stay active and engaged. Effective discussions require that everyone comes prepared. Be ready to share your opinions and thoughts.

Late Policy: As described under Homework, late submissions are penalized 1 percentage point of the assignment’s weight per day.

Academic Honesty: All work submitted must be the student’s own. Violations will result in a zero for the first offense; subsequent violations result in a failing grade for the course. Submissions will be checked via Turnitin.

AI Use Policy: Students are permitted to use AI tools (e.g., ChatGPT, Claude) responsibly as editorial assistants and sounding boards to refine, stress-test, or get feedback on their own work, not to generate drafts in their place. The final submission must predominantly reflect the student’s own understanding and reasoning.

A few risks to avoid and practices to adopt when using AI for scholarly writing: the main risks to watch are cognitive offloading and voice homogenization; the practices that help are working in editor-not-rewriter mode and applying a self-check question before submitting.

If you use AI tools, you must:

For deeper discussion of these trade-offs, see Paul Goldsmith-Pinkham’s (Yale) Writing and Thinking with AI Assistance. For practical writing principles and discipline-specific norms AI tools often miss (active voice, concrete examples, citation integrity, anti-AI-pattern hygiene), see this open-source GitHub repository by Lu Han, which distills writing guidance from leading social scientists, including Nobel laureates Claudia Goldin and Michael Kremer alongside John Cochrane, Deirdre McCloskey, and Jesse Shapiro. The takeaway I find most compelling: academic writing is not an art but a learnable craft, a set of best practices that students can study and apply directly.

Accommodation: Students with disabilities or special needs should contact the instructor during the first week of class to arrange appropriate accommodations.