All posts by Brendan Buff

APDU Workshop Series: Making the Best of the 2020 Census

Virtual Workshop

Town Halls: April 14 and May 12, 2021

Trainings: June 16, August 18, and September 15, 2021

Office Hours: Biweekly beginning June 9, 2021

Price: Free

Register Here

Accurate statistics about 2020 will rely on much more than the decennial census data collection. Developing reliable data will require an understanding of challenges resulting from the pandemic, combined with greater use of non-traditional sources like administrative records. The solutions to these problems will impact how data is gathered going forward for a variety of purposes: education, housing, economic development, public health, and more.

Register today for this series of town hall events and trainings. During this workshop series you will learn more about the quality of the data that state and local leaders rely on and how you can improve and supplement it.

Town Hall #1: April 14, 2021 (3:00 – 4:00 PM ET)
2020 Census was “Different” – A Rundown of Issues

Facilitators:

  • Amy O’Hara, Research Professor, Massive Data Institute, Georgetown University
  • danah boyd, Principal Investigator, Microsoft Research & Founder, Data & Society

Between the COVID-19 pandemic, political interference, and disclosure avoidance concerns, this census was deeply affected. Amy and danah will discuss what happened with the census, where we are now, what researchers are hearing from the Census Bureau, the updated timeline, and what the Census Bureau can still fix.

Town Hall #2: May 12, 2021 (3:00 – 4:30 PM ET)
Solving Data “Differences” – Assessing the Use Cases

In this town hall, we will solicit your concerns and questions about upcoming census products – specifically about urban/rural, housing, workforce, health, and justice use cases. We will discuss data sources and methods for these different use cases. Since 2020 census products are delayed, we will discuss alternative data sources that may support population measurement.

Training #1: June 16, 2021 (1:00 – 3:30 PM ET)
Addressing the Census – Why Address Data is Crucial and How to Use It

In the first of a series of trainings focused on preparing data users to work with 2020 census data, we will begin by familiarizing the group with the types of address data that lead to a high-quality census enumeration, help validate the census publications as they are released, and can potentially support a Count Question Resolution challenge. In this session, we will review coverage and classification issues, how to evaluate data sources, and tools to assess your data.
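
For a small taste of the kind of assessment this session will cover, here is a minimal R sketch, using hypothetical addresses and a deliberately simplistic normalization (production address matching requires far more care), that compares a local address list against a reference list to flag possible coverage gaps:

    # Hypothetical local address list vs. a hypothetical reference list.
    # The normalization is deliberately minimal; real matching needs far more care.
    local_list <- c("123 Main St.", "125 MAIN STREET", "7 Oak Ave", "9 Elm Rd")
    reference  <- c("123 MAIN ST", "125 MAIN ST", "7 OAK AVE", "11 PINE CT")

    normalize <- function(x) {
      x <- toupper(x)
      x <- gsub("[[:punct:]]", "", x)     # drop periods and other punctuation
      x <- gsub("\\bSTREET\\b", "ST", x)  # collapse common suffix variants
      x <- gsub("\\bAVENUE\\b", "AVE", x)
      x <- gsub("\\bROAD\\b", "RD", x)
      trimws(x)
    }

    # Local addresses with no match in the reference list are possible coverage gaps
    unmatched <- local_list[!normalize(local_list) %in% normalize(reference)]
    unmatched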

Training #2: August 18, 2021 (1:00 – 3:30 PM ET)
Age Bins – Where to Find More Data

In our second training, we will discuss the importance of obtaining accurate data on different age categories. The Census Bureau has released demonstration data on its disclosure avoidance system; however, age bins have not been a component. Accurate age bins are critical for urban planning, public health, social research, and funding, and we know that the census has traditionally undercounted very young children and overcounted the elderly. We will discuss how possible imprecision in published census results may affect the age distribution and consider how age bins can be smoothed. We will also explore other datasets that can be used to understand key population subgroups.
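
As a toy illustration of the smoothing idea, the R sketch below applies a simple centered moving average to hypothetical single-year-of-age counts; actual demographic work would use more principled methods:

    # Hypothetical single-year-of-age counts for a small area, with the kind of
    # year-to-year jitter that noise-infused publications could introduce
    ages   <- 0:10
    counts <- c(52, 61, 40, 70, 38, 66, 45, 73, 41, 59, 48)

    # A 3-year centered moving average irons out implausible single-year swings;
    # stats::filter() leaves the endpoints as NA
    smoothed <- stats::filter(counts, rep(1 / 3, 3), sides = 2)

    data.frame(age = ages, published = counts, smoothed = round(as.numeric(smoothed), 1))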

Training #3: September 15, 2021 (1:00 – 3:30 PM ET)
Beyond COVID – Identifying Public Health Data to Prevent Disaster

Whether it’s a global pandemic or an overdose crisis in your community, we want to empower you with the tools and resources to identify patterns and be prepared to respond. This training will cover the new administration’s Executive Order and which datasets can drive insights around health, highlighting the differences between statistical and tactical data. We will also discuss measuring migration and service utilization. With these tools, we hope to prepare attendees to identify the best data and methods to deal with future public health crises or natural disasters.

2021 APDU Conference Call for Proposals

Public Data: Making Sense of the New Normal

APDU is welcoming proposals on “making sense of the new normal” using public data. With economic, public health, and governance challenges arising from COVID-19 and political polarization, trustworthy public data is vital to open and honest policy debates. APDU is interested in proposals regarding:

  • Novel uses of public data to understand the shifting American landscape;
  • Ways that researchers and advocates are ensuring that public data is accurate and equitable;
  • How public data can help restore trust in institutions;
  • How to rebuild trust in public data; or
  • Other related and relevant topics.

Proposals may be based on a particular project, data practice, or formal paper. You may submit ideas for a single presentation or a full panel (three presenters, plus a moderator). However, it is possible that we will accept portions of panel submissions to combine with other presenters. Submissions will be evaluated on the quality of work, relevance to APDU Conference attendees, uniqueness of topic and presenter, and thematic fit.

EXTENDED Deadline: March 26, 2021

Please submit your proposal using the Survey Monkey collection window below.  Proposals will need to be submitted by members of APDU, and all presenters in a panel must register for the conference (full conference registration comes with a free APDU membership).  Proposers will be notified of our decision by mid-April.

About APDU

The Association of Public Data Users (APDU) is a national network that links users, producers, and disseminators of government statistical data. APDU members share a vital concern about the collection, dissemination, preservation, and interpretation of public data. The conference will be held virtually on July 26-29, 2021, and will bring together data users and data producers for conversations and presentations on a wide variety of data and statistical topics.


2020 APDU Candidate Statements

Candidate for President: Mary Jo Hoeksema

Since January 2004, Mary Jo Hoeksema has been the Director of Government Affairs for the Population Association of America and the Association of Population Centers. In addition to representing PAA and APC, Ms. Hoeksema has co-directed The Census Project since 2008. Prior to her position with PAA/APC, Ms. Hoeksema worked at the National Institutes of Health for approximately 10 years, as the Legislative Officer at the National Institute on Aging and as the Special Assistant to the Director of the NIH Office of Policy for Extramural Research Administration. Ms. Hoeksema served as a Legislative Assistant for Congresswoman Rosa DeLauro and as Legislative Correspondent for U.S. Senator Jeff Bingaman. Ms. Hoeksema moved to Washington, DC from her home state of New Mexico to work at the Council for a Livable World as a 1990 Scoville Fellow.

Ms. Hoeksema has a Master of Public Administration from the George Washington University and is a former Presidential Management Fellow. She also has a bachelor’s degree in political science and history from the University of New Mexico.

Candidate Statement

I was introduced to APDU shortly after arriving at the Population Association of America (PAA). I was immediately drawn to the organization given its mission and the fellowship that I found with its members. I discovered that the annual meeting was a unique opportunity to meet data users outside of academia, especially those from federal, state, and local governments, and to learn firsthand what issues were affecting their access to timely and accurate data.

I have served on the APDU board, as a member and previously as Vice President, for approximately four years. During this time, I’ve been involved in several initiatives, including revising the organization’s strategic plan, advising APDU’s advocacy agenda, and co-chairing the annual meeting. These experiences, combined with my frequent interactions with APDU members, have given me insight into the organization’s strengths and challenges. If elected president, I would build upon the work APDU has initiated to:

  • Increase APDU’s membership, particularly among young professionals entering the field;
  • Enhance the organization’s visibility inside and outside of the data user community;
  • Improve APDU’s education and training opportunities;
  • Strengthen communication with APDU members; and
  • Seek opportunities to collaborate with similar organizations to advance the interests of the diverse data users APDU represents.

If elected president, I will always be open to hearing ideas and discussing issues with members.

Candidate for Vice President: Amy O’Hara, Research Professor, Georgetown University

Amy O’Hara is a Research Professor in the Massive Data Institute and Executive Director of the Federal Statistical Research Data Center at the McCourt School of Public Policy. She also leads the Administrative Data Research Initiative, improving secure, responsible data access for research and evaluation. Her research focuses on population measurement, data quality, and record linkage. O’Hara has published on topics including the measurement of income, longitudinal linkages to measure economic mobility, and the data infrastructure necessary to support government and academic research.

Prior to joining Georgetown, O’Hara was a senior executive at the U.S. Census Bureau where she founded their administrative data curation and research unit. She received her Ph.D. in Economics from the University of Notre Dame.

Candidate Statement

Last year, I wanted to serve on the APDU board to improve data access and quality for members, researchers, and program administrators. This year has revealed the cracks in our measurement infrastructure and the dire need to explain and inform our decision makers. 2020 has been rough on everyone, but especially on institutions like the CDC and the Census Bureau. The impact of the pandemic continues to evolve in state and local governments, which face rising infection rates, battered economies, volatile budgets, and a great deal of uncertainty. Data will not solve these problems, but none of these problems can be solved without data.

APDU can, and must, foster coordination between federal, state, and local data producers and data users. For APDU, I will work toward establishing standards and norms for secure and responsible data use. Our community needs to incorporate broader views of where data comes from and what it is needed for; emphasize data utility when designing privacy protections; and increase social license.

Candidate At-Large Director: Bernie Langer, Senior Data Analyst, Center for Court Innovation

Bernie Langer’s expertise in public data comes from his previous work at PolicyMap. Mr. Langer has a deep and broad knowledge about federal statistical agencies and private data providers, as well as experience working with data and data users to solve problems. He worked with data from the Census Bureau, BLS, IRS, SSA, HUD, USDA, FDIC, FBI, FCC, FEMA, DOT, NCES, EPA, SBA, and CDC, just to name a few. Mr. Langer also led PolicyMap’s “Mapchats” webinar series, a forum for data providers and users to discuss their work.

Mr. Langer’s current work at the Center for Court Innovation deals with a very different type of data, regarding New York City’s criminal justice system. In his role as a senior data analyst, Mr. Langer works with the organization’s Supervised Release Program, a pre-trial alternative to bail.

Candidate Statement

I am excited to continue serving on the APDU Board of Directors. In my last term, I served on the conference committee, which put together APDU’s first ever virtual conference. The conference was a success, virtually bringing together people working in data from across the country at a crucial point during the 2020 Census and Covid crisis.

I find APDU’s conferences, webinars, and newsletters invaluable. As a board member, I would continue my commitment to maintaining the high quality of APDU’s services and events, finding additional ways for data providers and users to interact, and raising the profile of public data in society.

Candidate for At-Large Director: Michelle Riordan-Nold, Executive Director, Connecticut Data Collaborative

Michelle Riordan-Nold has served as Executive Director of the Connecticut Data Collaborative (CTData) since 2014. In her current role, Ms. Riordan-Nold leads CTData, whose mission is to democratize access to public data and build data literacy skills to increase data-informed decision making in Connecticut. CTData is also the designated Census State Data Center for Connecticut. In addition, the organization holds monthly public data literacy workshops; creates maps and other visualization tools for community organizations to access and use data; and is building an integrated data system in Hartford. In 2020, the organization won the CT Entrepreneurial Award in Education.

Prior to leading CTData, Ms. Riordan-Nold worked as a research analyst for the CT Economic Resource Center and, before that, for the Connecticut Legislature’s Program Review and Investigations Committee. Ms. Riordan-Nold has a Bachelor’s degree in Mathematics from Boston College and a Master’s in Public Policy from the University of Chicago.

Candidate Statement

I have been both an attendee and a presenter at the APDU conferences for the past five years. It is great to be a part of a community that is working on improving public access to data and sharing new ways to access and improve its use. I am always amazed at the initiatives happening at the federal level and leave each conference with new ideas and data to share with the community of data users we serve in Connecticut.

If elected, I would be interested in finding ways to increase the membership to include more state level data users. Federal data is critical to much of the work at the state level and I see an opportunity for sharing and increasing the knowledge of both state and federal data users to help improve the work at all levels of government.

I also see an important role for APDU in staying connected and informed about the evolving Disclosure Avoidance Policy implementation. I believe this should be at the forefront of all data discussions and was encouraged by the attention it received during this year’s conference. APDU plays an important role in guiding the data user community on how to use the data, but it can also advocate to make sure the data is provided in such a way that it can be used for informed decision making at all levels of government. I would encourage APDU to take a more active role in advocating for transparency around the implementation.

If elected, I hope to provide a state level perspective and contribute to the growth of the organization by helping to broaden the membership to include a more diverse group of data users.

Candidate for At-Large Director: Daniel Quigg, CEO, Public Insight Corporation

Dan Quigg is a serial entrepreneur focusing primarily on software analytics. Since 2012, Dan has served as CEO of Public Insight Data Corporation (Public Insight), a business intelligence company that transforms public data into actionable insights with solutions in career and workforce development, staffing and recruiting, and higher education benchmarking. Public Insight leverages industry and government data in its self-service business intelligence applications such as Insight for Work and Insight for Higher Education.

Over a career spanning more than thirty years, Dan has founded or led eight early-stage businesses. Dan is an Ernst & Young Entrepreneur of the Year finalist and winner of the Smart Business Rising Star Award. He successfully sold three businesses, two to public technology firms where he took a senior executive position. He has also served on the adjunct entrepreneurship faculty of Kent State University and on multiple corporate boards. Dan has also served as an advisor for micro-economic development in developing countries, primarily Rwanda and Peru. He currently serves on the National Council of the Valparaiso University College of Business.

Dan received his B.S. from Valparaiso University in 1981 and his CPA in 1983.  He received his MBA from Case Western Reserve University Weatherhead School of Management in May 2007.  Dan was the inaugural winner of the Weatherhead Executive MBA Leadership Award as nominated by his peers.

Candidate Statement

I have always had a passion for data and am a self-described “data junkie”. I founded Public Insight in 2012 because I saw an asset in public data that was dramatically underutilized. Public Insight was built around that very concept.

I have been involved with APDU since starting Public Insight. I and my company have benefitted greatly from the research, webinars, and conferences. However, I feel that there is a large, untapped audience in the private sector that uses public data and is not being reached by APDU. I see it every day. If elected, I would advocate for outreach to the private sector. Given my startup experience, I can add a lot of value in extending APDU’s reach into the private sector.

I would advocate for more online education and training for the private sector. In the labor market particularly, there is a hunger for more information due to pandemic-induced volatility. I see courses like those currently offered through the LMI Institute as a vehicle to reach a broader audience with minimal investment and risk.

My impression is that APDU is moving more and more toward policy and advocacy. My interest is not in these areas, nor is it where I add value. I am a user of public data and want to see its value disseminated. This is where I can add value and where the mission is aligned with Public Insight.

Candidate for At-Large Director: Lori Turk-Bicakci, Ph.D., Director, Lucile Packard Foundation for Children’s Health

Lori Turk-Bicakci, Ph.D., is Director for Kidsdata, a program of the Lucile Packard Foundation for Children’s Health. She promotes data-based decision making and action to improve children’s health and well-being, and she contributes to the quality, relevance, and utility of the data and content on kidsdata.org.  She oversees the process of collecting, preparing, and releasing data from more than 35 federal and state data sources. Before joining the Foundation, Dr. Turk-Bicakci was a senior researcher at American Institutes for Research. She has extensive experience with data collection, analysis, and reporting for education, social services, and other research projects that support children’s long-term health and development. Prior to her work in research, Dr. Turk-Bicakci was a middle school social studies teacher.

Fundamentals of Data Science and Visualization

Virtual Training

November 9 – 19, 2020

Classes: Nov 9, 10, 16, 17, 19 from 2:00-4:00 pm Eastern

Office Hours: Nov 9, 10, 16, 17, 19 from 4:00-5:00 pm Eastern

DOWNLOAD AGENDA

PDF Registration | Online Registration

Data analysts can use a variety of methods and tools to accomplish their goals. With a deeper understanding of data visualization software packages, your organization can produce more intuitive data visualizations in less time and identify the best software solutions to optimize your team’s workflows.

In this course, we will review best practices in data visualization design and use cases for Excel, Tableau, and R (programming language).

Learn how to clean and format data in Excel, create interactive dashboards in Tableau, and clean and visualize data in R. This course will help participants identify use cases for each software package that maximize impact with minimal effort, expanding each participant’s toolbox as an analyst.
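
For a flavor of the R portion of the course, here is a minimal sketch of the clean-then-visualize workflow, using hypothetical data and assuming the ggplot2 package is installed:

    library(ggplot2)

    # Hypothetical county unemployment figures with the messiness typical of raw extracts
    raw <- data.frame(
      county = c("Adams ", "baker", "Clark", NA),
      rate   = c("4.2", "5.1", "n/a", "3.8")
    )

    # Clean: drop rows with missing county, trim whitespace, standardize case,
    # and coerce the rate column to numeric (bad values become NA and are dropped)
    clean <- raw[!is.na(raw$county), ]
    clean$county <- tools::toTitleCase(trimws(clean$county))
    clean$rate   <- suppressWarnings(as.numeric(clean$rate))
    clean <- clean[!is.na(clean$rate), ]

    # Visualize: a simple labeled bar chart
    ggplot(clean, aes(x = county, y = rate)) +
      geom_col() +
      labs(title = "Unemployment rate by county (hypothetical data)",
           x = NULL, y = "Rate (%)")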

Join us to learn about how your organization can better leverage data visualization software!

Meet Your Instructor:

Lee Winkler joined the Center for Regional Economic Competitiveness (CREC) in 2018 after graduating with a Master’s in Public Policy from the George Washington University. He currently supports projects analyzing state-level certification and license attainment and the prevalence of educational and workforce credentials. Lee regularly uses Tableau to clean data, mine insights, and create interactive visualizations, and he is excited to help the class discover how Tableau can add value to their workflows.

Registration:
APDU Members: $390
Non-Members: $715

Looking Back on the 2020 APDU Annual Conference

With the 2020 APDU Annual Conference in the rearview mirror, now is a good time to reflect on the week and look ahead to what’s next.

This year’s conference, like so many things in 2020, was disrupted but not diminished. While we didn’t have the opportunity to meet with each other in person, the virtual format enabled some of our friends from around the country to participate who might not have been able to otherwise.

Speakers like danah boyd of Microsoft Research and the Data & Society Research Institute brought a unique perspective to the conference, challenging our thinking on issues ranging from how we approach privacy and accuracy to the impacts misinformation and data voids can have on our understanding of data quality and reliability.

Federal agency leaders such as Deborah Stempowski, Brian Moyer, Bill Beach, and Mary Bohman provided insider insights into their organizations.

Speakers from universities and research organizations across the country covered hot topics such as data on COVID-19, evictions, policing, and more.

Speakers from the Census Bureau, universities, and nonprofits discussed how the Disclosure Avoidance System will affect the quality of Census data.

Attendees met with APDU board members in a series of town hall conversations on a variety of topics – offering a promising way for APDU members to connect with one another.

This year’s conference was a success for a variety of reasons – but the biggest reason was the engagement of our attendees and speakers. Stay tuned for continued quality programming in Fall 2020!

Intermediate Data Visualization Techniques in Tableau

August 25-September 3, 2020

Virtual Training

AGENDA

A picture is worth a thousand words. Make your case with easy-to-understand data visualizations. Give your audience the freedom to explore your data in new ways with interactive dashboards that answer immediate questions and uncover new insights. Data visualization tools can help you communicate better, both internally and with your partners.

Tableau can help you produce more intuitive data visualizations, and we can show you how. In this course, you will build your skills in making appropriate graphics, and you will incorporate complex calculations in ways that improve insights, make charts more relevant, and create impactful dashboard graphics.

Learn how to clean, shape, aggregate, and merge frequently used public data in Tableau Prep. Then, organize your visualizations into sleek dashboards in Tableau Desktop. We will provide helpful tips on how to analyze, design, and communicate these data in ways that will wow your supervisor and organization’s customers.

Training Prerequisites:

Skills: Participants must have a basic understanding of how Tableau works before attending this class, including knowledge of Tableau terminology, uploading data, editing data sources, and creating basic charts. Attendees should be familiar with all materials presented in the Pre-Session Videos: Overview of Charts and Calculated Fields.

Tools: Laptop, wired mouse, Tableau Desktop (personal, professional, or public version), and Tableau Prep.

  • The public version of Tableau Desktop is available at: https://public.tableau.com/s/download
  • Tableau Prep software can be downloaded here: https://www.tableau.com/products/prep/download

Zoom will be required for this training. If you have Zoom restrictions on a work laptop, we recommend using a personal laptop or desktop. We do not recommend using an iPad for this training.

Pricing:

  • APDU, C2ER, LMI Institute Premium Organizational Members: $495
  • APDU, C2ER, LMI Institute Individual & Organizational Members: $575
  • Non-Members: $715

CANCELLATION POLICY: APDU must receive and confirm cancellations before 5:00 PM (Eastern Time) on August 14, 2020, after which a $135 cancellation fee will apply. Substitute registrations will be accepted.

APDU Member Blog Post: It’s not too late to rebuild data-user trust in Census 2020 data products

By: Jan Vink, Cornell Program on Applied Demographics
Vice chair of the Federal State Cooperative on Population Estimates Steering Committee
Twitter: @JanVink18
Opinions are my own

The Census Bureau is rethinking the way it will produce the data published from the 2020 Census. The Bureau argues that the old approach is no longer adequate because, with enough computing power, someone could learn too many details about respondents.

There are two separate but related aspects to this rethinking:

  1. The table shells: what tabulations to publish and what not to publish
  2. Disclosure Avoidance Systems (DAS) that add noise to the data before creating these tables

Both aspects have huge consequences for data users. A good place to start reading about this rethinking is the 2020 Census Data Products pages at the Census Bureau.

The Census Bureau is aware of this impact and has asked the data-user community for input throughout the decision process. There were Federal Register Notices asking for use cases related to the 2010 tables and a request for feedback on a proposed set of tables. There were also publications applying a DAS to 1940 Census data, to PL94-171 data from the 2018 test, and to the 2010 Demonstration Products. Currently, the Census Bureau is asking for feedback on how to measure the progress of the DAS implementation it plans to use for the first set of products coming out of the Census.

The intentions behind this stakeholder involvement were good, but it did not lead to buy-in from those stakeholders. Many fear that the quantity and quality of the published data will severely impair the ability to make sound decisions and conduct sound research based on the 2020 Census and on products directly or indirectly derived from its data. Adding to this anxiety are the very difficult, unexpected circumstances the Census Bureau has had to deal with while collecting the data.

From my perspective as one of those stakeholders wary about the quantity and quality of the data, there are a few things that could have gone better:

  • The need for rethinking has not been communicated clearly. For example, I cannot find a Census Bureau publication that plainly describes the re-identification process; all I can find are a few slides in a presentation. A layman’s explanation of the legal underpinning would also be helpful, since some argue that there has been a drastic reinterpretation.
  • The requests for feedback were all very complicated and time-consuming, and they reached only a small group of very dedicated data users who felt tasked with responding on behalf of many and stuck to the low-hanging fruit.
  • It is not clear what the Census Bureau did with the responses.
  • The quality of the 2010 Demonstration Products was very low and would have severely impacted my use of the data, and many others’ uses.
  • Most Census Bureau communications about this rethinking consisted of a mention of a trade-off between privacy and accuracy, followed by a slew of arguments about the importance of privacy and hardly any mention of how important accuracy is to the mission of the Census Bureau. Many stakeholders walked away with the feeling that the Bureau feels responsible for privacy protection, but not as much for accuracy.

There is a hard deadline for the production of the PL94-171 data, although Congress has the power to extend that date because of the Covid-19 pandemic. Working back from that, I am afraid that decision time is not far away. The Census Bureau is developing the DAS using an agile process with about 8 weeks between ‘sprints’. The Bureau published updated metrics from sprint II at the end of May but had already started sprint IV at that time. If the 8 weeks between sprints hold, my estimate is that there is room on the schedule for only 2 or 3 more sprints, and very little time to rebuild trust from the data-user community.

Examples of actions that would help rebuilding some trust are:

  • Appointing someone who is responsible for stakeholder interaction. So far, my impression is that there is no big-picture communication plan, and two-way communication depends too much on who you happen to know within the Census Bureau. Otherwise, communication is impersonal and slow, and often without a possibility for back-and-forth. This person should also have the seniority to fast-track the publication review process so stakeholders are not constantly two steps behind.
  • Plan B. A chart often presented to us is a line that shows the trade-off between privacy and accuracy. The exact location of that line depends on the privacy budget and the implementation of the DAS, and the Census Bureau seems to hold the position that it can implement a DAS with a sweet spot between accuracy and privacy that would be an acceptable compromise. But what if there is no differential-privacy-based DAS implementation (yet?) that can satisfy a minimal required accuracy and a maximal allowed disclosure risk simultaneously? So far it is an unproven technique for such a complex application. It would be good to hear that the Census Bureau has a plan B and a set of criteria that would lead to a decision to go with plan B. (A toy numeric illustration of this trade-off appears after this list.)
  • Promise another set of 2010 data similar to the 2010 demonstration products so data users can re-evaluate the implications of the DAS. This should be done in a time frame that allows for tweaks to the DAS. Results of these evaluations could be part of the decision whether to move to plan B.
  • Have a public quality assurance plan. The mission of the Census Bureau is to be the publisher of quality data, but I could not find anything on the Census Bureau website that indicates what is meant by data quality or what quality standards are used. Neither could I find who in the Census Bureau oversees and is responsible for data quality. For example: does the Bureau see accuracy and fitness for use as the same concept? Others disagree. And what about consistency? Can inconsistent census data still be of high quality? Being open about data quality and having a clear set of quality standards would help show that quality is of similar priority as privacy.
  • Publish a timeline, with goals and decision points.
  • Feedback on the feedback: what did the Bureau do with the feedback? What criteria were used to implement some feedback but not others?
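
As a toy numeric illustration of the trade-off line mentioned above: for a basic Laplace mechanism on a single count query with sensitivity 1 (a textbook simplification, not the Bureau’s actual Top-Down algorithm), the expected absolute error of a published count is exactly 1/ε, so the trade-off can be tabulated directly in R:

    # Laplace mechanism for a sensitivity-1 count query: the noise scale is
    # 1/epsilon, and the expected absolute error equals the scale. Smaller
    # epsilon means stronger privacy and larger expected error.
    epsilon <- c(0.1, 0.25, 0.5, 1, 2, 4)
    data.frame(epsilon, expected_abs_error = 1 / epsilon)

Where the Bureau “sets the dial” on that line, and what would trigger a plan B if no point on it is acceptable, is exactly the open question raised above.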

Time is short and stakes are high, but I think there are still openings to regain the trust of the data community and to produce Census data products that are of provably high quality while protecting the privacy of respondents.


APDU Launches 2020 Annual Conference

APDU is opening registration for the 2020 Annual Conference, set to be held at the Key Bridge Marriott in Arlington, VA on July 29-30, 2020. Trending issues in the world of data – issues of privacy, accuracy, and access – are profoundly changing how we think about the collection, production, sharing, and use of data. Register for the APDU Annual Conference today to learn how the coronavirus is impacting public data and evidence-based policymaking. Attendees will also hear about outcomes from the decennial census and the privacy and public health issues that are impacting it in 2020.

We recognize the tentative nature of in-person events in these uncertain times, but we will continue to plan for the conference with the hope of a return to normal. We are evaluating plans for a hybrid virtual conference to ensure that the conference will be delivered either live or online. Please know that cancellation fees will not apply to those who register early. In the event the conference cannot be held in person and transitions online, your registration will automatically transfer to the online content. If you don’t wish to attend online, we will provide a full refund on request. We will monitor these issues closely and be responsive to our members and partners.

APDU Member Post: Assessing the Use of Differential Privacy for the 2020 Census: Summary of What We Learned from the CNSTAT Workshop

By:

Joseph Hotz, Duke University

Joseph Salvo, New York City Department of City Planning

Background

The mission of the Census Bureau is to provide data that can be used to draw a picture of the nation, from the smallest towns and villages to the neighborhoods of the largest cities. Advances in computer science, better record linkage technology, and the proliferation of large public data sets have increased the risk of disclosing information about individuals in the census.

To assess these threats, the Census Bureau conducted a simulated attack, reconstructing person-level records from published 2010 Census tabulations that had been protected using its previous Disclosure Avoidance System (DAS), which was based in large part on swapping data records across households and localities. When combined with information in commercial and publicly available databases, these reconstructed data suggested that 18 percent of the U.S. population could be identified with a high level of certainty. The Census Bureau concluded that, if adopted for 2020, the 2010 confidentiality measures would lead to a high risk of disclosing individual responses, violating Title 13 of the U.S. Code, the law that prohibits such disclosures.

Thus, the Census Bureau was compelled to devise new methods to protect individual responses from disclosure. Nonetheless, such efforts – however well-intentioned – may pose a threat to the content, quality and usefulness of the very data that defines the Census Bureau’s mission and that demographers and statisticians rely on to draw a portrait of the nation’s communities.

The Census Bureau’s solution to protecting privacy is a new DAS based on a methodology referred to as Differential Privacy (DP). In brief, it functions by leveraging the same database reconstruction techniques that were used to diagnose the problem in the previous system: the 2020 DAS synthesizes a complete set of person- and household-level data records based on an extensive set of tabulations to which statistical noise has been added. Viewed as a continuum between total noise and total disclosure, the core of this method involves a determination of the amount of privacy loss, or ε, that can be accepted without compromising data privacy while ensuring the utility of the data. The key then becomes “where to set the dial”: set ε too low and privacy is ensured at the cost of utility, but set ε too high and utility is ensured but privacy is compromised. In addition to the overall level of ε, its allocation over the content and detail of the census tabulations for 2020 is important. For example, specific block-level tabulations needed for redistricting may require a substantial allocation of the privacy-loss budget to achieve acceptable accuracy for this key use, but the cost is that the accuracy of other important data (including for blocks, such as persons per household) will likely be compromised. Finding ways to resolve these difficult tradeoffs represents a serious challenge for the Census Bureau and users of its data.
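
To make the noise infusion concrete, here is a minimal R sketch of the textbook Laplace mechanism for count queries. The counts are hypothetical, and the Bureau’s actual Top-Down algorithm, which allocates ε across a geographic hierarchy and post-processes the results, is far more elaborate:

    set.seed(42)

    # Hypothetical block-level counts for three demographic groups
    true_counts <- c(group_a = 120, group_b = 35, group_c = 4)

    # Laplace(0, 1/epsilon) noise for sensitivity-1 count queries, sampled as the
    # difference of two exponentials; smaller epsilon means more noise
    laplace_noise <- function(n, epsilon) rexp(n, rate = epsilon) - rexp(n, rate = epsilon)

    epsilon <- 0.5
    noisy_counts <- true_counts + laplace_noise(length(true_counts), epsilon)

    round(rbind(true = true_counts, noisy = noisy_counts), 1)

Note how the smallest count can be swamped by noise while the largest is barely perturbed in relative terms, which is the pattern behind the small-area concerns discussed below.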

The CNSTAT Workshop

In order to test how well this methodology works in terms of the accuracy of noise-infused data, the Census Bureau issued special 2010 Census files subject to the 2020 DAS. The demonstration files applied the 2020 Census DAS to the 2010 Census confidential data — that is, the unprotected data from the 2010 Census that are not publicly available. The demonstration data permit scientific inquiry into the impact of DP. In addition, the Census Bureau commissioned the Committee on National Statistics (CNSTAT) of the National Academies of Sciences, Engineering, and Medicine to host a 2-day Workshop on 2020 Census Data Products: Data Needs and Privacy Considerations, held in Washington, DC, on December 11-12, 2019. The two-fold purpose of the workshop was:

  • To assess the utility of the tabulations in the 2010 Demonstration Product for specific use cases/real-life data applications.
  • To generate constructive feedback for the Census Bureau that will be useful in setting the ultimate privacy-loss budget and in allocating shares of that budget over the broad array of possible tables and geographic levels.

We both served as the co-chairs of the Committee that planned the Workshop. The Workshop brought together a diverse group of researchers who presented findings for a wide range of use cases that relied on data from past censuses.

These presentations, and the discussions surrounding them, provided a new set of evidence-based findings on the potential consequences of the Census Bureau’s new DAS. In what follows, we summarize “what we heard” or learned from the Workshop. This summary is ours alone; we do not speak for the Workshop’s Planning Committee, CNSTAT, or the Census Bureau. Nonetheless, we hope that the summary below provides the broader community of users of decennial census data with a better understanding of some of the potential consequences of the new DAS for the utility of the 2020 Census data products. Moreover, we hope it fosters an ongoing dialogue between the user community and the Census Bureau on ways to help ensure that data from the 2020 Census are of high quality, while still safeguarding the privacy and confidentiality of individual responses.

What We Heard

  • Population counts for some geographic units and demographic characteristics were not adversely affected by Differential Privacy (DP). Based on results presented at the Workshop, there were not, in general, differences in population counts between the 2010 demonstration file and the published 2010 data at some levels of geography. For the nation as a whole and for individual states, the Census Bureau’s algorithm ensured that counts were exact, i.e., counts at these levels were held invariant by design. Furthermore, the evidence presented indicated that the counts in the demonstration products and those in the actual 2010 data were not very different for geographic areas that received direct allocations of the privacy budget, including most counties, metro areas (aggregates of counties), and census tracts. Finally, for these geographic areas, the population counts by age in the demonstration products were fairly accurate when using broader age groupings (5-10 year groupings or broader), as well as for some demographic characteristics (e.g., for non-Hispanic whites, and sometimes for Hispanics).
  • Concerns with data for small geographic areas and units and certain population groups. At the same time, evidence presented at the Workshop indicated that most data for small geographic areas – especially census blocks – are not usable given the privacy-loss level used to produce the demonstration file. With some exceptions, applications demonstrated that the variability of small-area data (i.e., blocks, block groups, census tracts) compromised existing analyses. Many Workshop participants indicated that a larger privacy loss budget will be needed for the 2020 Census products to attain a minimum threshold of utility for small-area data. Alternatively, compromises in the content of the publicly-released products will be required to ensure greater accuracy for small areas.

The Census Bureau did not include a direct allocation of the privacy-loss budget in the 2010 demonstration file for all geographic areas, such as places and county subdivisions, or for detailed race groups, such as American Indians. As noted by numerous presenters, these units and groups are very important for many use cases, as they are the basis for political, legal, and administrative decision-making. Many of these cases involve small populations, and local officials rely on the census as a key benchmark; in many cases, it defines who they are.

  • Problems for temporal consistency of population counts. Several presentations highlighted the problem of temporal inconsistency of counts, i.e., from one census to the next using DP. The analyses presented at the Workshop suggested that comparisons of 2010 Census data under the old DAS to 2020 Census data under DP may well show inexplicable trends, up or down, for small geographic areas and population groups. (And comparisons of 2030 data under DP with 2020 data under DP may also show inconsistencies over time). For example, when using counts as denominators to monitor disease rates or mortality at finer levels of geography by race, by old vs young, etc., the concern is that it will be difficult to determine real changes in population counts, and, thus, real trends in disease or mortality rates, versus the impact of using DP.
  • Unexpected issues with the post-processing of the proposed DAS. The Top-Down algorithm (TDA) employed by the Census Bureau in constructing the 2010 demonstration data produced histograms at different levels of geography that are, by design, unbiased, but they are not integers and include negative counts. The post-processing required to produce a microdata file capable of generating tabulations of persons and housing units with non-negative integer counts produced biases that are responsible for many anomalies observed in the tabulations. These are both systematic and problematic for many use cases. Additional complications arise from the need to hold some data cells invariant to change (e.g., total population at the state level) and from the separate processing of person and housing unit tabulations.

The application of DP to raw census data (the Census Edited File [CEF]) produces estimates that can be used to model error, but the post-processing adds a layer of complexity that may be very difficult to model, making the creation of “confidence intervals” problematic.
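
As a toy illustration of how naive post-processing can bias small counts (hypothetical numbers; the actual TDA post-processing solves a complex constrained optimization rather than the clip-and-rescale shown here):

    set.seed(7)

    # Hypothetical counts for five blocks; the total is treated as invariant
    true  <- c(1200, 45, 3, 0, 7)
    eps   <- 0.5
    noisy <- true + rexp(length(true), eps) - rexp(length(true), eps)  # Laplace noise

    # Naive post-processing: clip negatives, rescale to restore the invariant
    # total, and round to integers. Clipping pushes the smallest counts upward
    # on average, one source of the systematic biases described above.
    # (Rounding can leave the total slightly off; real systems solve this exactly.)
    clipped <- pmax(noisy, 0)
    post    <- round(clipped * sum(true) / sum(clipped))

    rbind(true, noisy = round(noisy, 1), postprocessed = post)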

  • Implications for other Census Bureau data products. Important parts of the planned 2020 Census data products cannot be handled by the current 2020 DAS and TDA approach. They will be handled using different but as-yet-unspecified methods that will need to be consistent with the global privacy-loss budget for the 2020 Census. These products were not included in the demonstration files and were out of scope for the Workshop. Nonetheless, as noted by several presenters and participants in the Workshop, these decisions raise important issues for many users and use cases going forward. To what extent will content for detailed race/Hispanic/nationality groups be available, especially for American Indian and Alaska Native populations? To what degree will data on household-person combinations and within-household composition be available under the DAS?

For example, while the Census Bureau has stated that 2025 will be the target date for the possible application of DP to the ACS, it has indicated that the population estimates program will be subject to DP immediately following 2020. These estimates would then be used for weighting and post-stratification adjustments to the ACS.

  • Need for a plan to educate and provide guidance for users of the 2020 Census products. Regardless of what the Census Bureau decides with respect to ε and how it is allocated across tables, the Workshop participants made clear that a major re-education plan for data users needs to be put in place, with a focus on how best to describe key data and the shortcomings imposed by privacy considerations and error in general. Furthermore, as many at the Workshop voiced, such plans must be in place when the 2020 Census products are released to minimize major disruptions to and problems with the myriad uses made of these data and the decisions based on them.
  • Challenging privacy concerns and their potential consequences for the success of the 2020 Census. Finally, the Workshop included a panel of experts on privacy. These experts highlighted the disclosure risks associated with advances in linking information in public data sources, like the decennial census, with commercial databases containing information on bankruptcies and credit card debt, driver licenses, and federal, state, and local government databases on criminal offenses, public housing, and even citizenship status. While there are federal and state laws in place to protect against the misuse of these governmental databases as well as the census (i.e., Title 13), their adequacy is challenged by advances in data linkage technologies and algorithms. And, as several panelists noted, these potential disclosure risks may well undercut the willingness of members of various groups – including immigrants (whether citizens or not), individuals violating public housing codes, or those at risk of domestic violence – to participate in the 2020 Census.

The Census Bureau has recently stated that it plans to have CNSTAT organize a follow-up set of expert meetings to “document improvements and overcome remaining challenges in the 2020 DAS.” In our view, such efforts, however they are organized, need to ensure meaningful involvement and feedback from the user community. Many within that community remain skeptical of the Bureau’s adoption of Differential Privacy and its consequences for their use cases. So, not only is it important that the Census Bureau try to address the various problems identified by Workshop presenters and others who evaluated the 2010 demonstration products; it is also essential that follow-up activities be designed to involve a broader base of user communities in a meaningful way.

We encourage members of the census data user community to become engaged in this evaluation process, agreeing, if asked, to become involved in these follow-up efforts. Such efforts will be essential to help ensure that the Census Bureau meets its dual mandate of being the nation’s leading provider of quality information about its people and economy while safeguarding the privacy of those who provide this information.

2020 APDU Conference Call for Proposals

#Trending in 2020: Data Privacy, Accuracy, and Access

APDU is welcoming proposals on any topic related to the privacy, accuracy, and access of public data.  Proposals can be for a single presentation or panel, whether based on a particular project, data practice, or formal paper.  In keeping with the theme of the conference, our interest is in highlighting the breadth of public data to both producers and consumers of public data.  Some examples of topics might cover:

  • Privacy
    • Differential privacy and tiered data
    • State/local data privacy issues
    • Data suppression
    • Corporate data privacy (e.g., Facebook’s use of differential privacy)
  • Accuracy
    • Machine learning and the use of programming languages
    • How data accuracy will affect redistricting or federal allocations
    • Federal agencies’ data protection actions and their impact on other agency data
    • Synthetic or administrative data
    • Decennial Census
      • Citizenship question
      • Complete Count Committee
  • Access
    • Future public data and policy developments
    • Current availability of public data (health, education, the economy, energy, the environment, climate, and other areas)
    • Federal statistical microdata such as ResearchDataGov
    • Federal Data Strategy updates and advocacy

Proposal Deadline: February 28, 2020.

You may submit ideas for a single presentation or a full panel (three presenters, plus a moderator). However, it is possible that we will accept portions of panel submissions to combine with other presenters. Submissions will be evaluated on the quality of work, relevance to APDU Conference attendees, uniqueness of topic and presenter, and thematic fit.

Please submit your proposal using the Survey Monkey collection window below.  Proposals will need to be submitted by members of APDU, and all presenters in a panel must register for the conference (full conference registration comes with a free APDU membership).  Proposers will be notified of our decision by March 13, 2020.

About APDU

The Association of Public Data Users (APDU) is a national network that links users, producers, and disseminators of government statistical data. APDU members share a vital concern about the collection, dissemination, preservation, and interpretation of public data.  The conference is in Arlington, VA on July 29-30, 2020, and brings together data users and data producers for conversations and presentations on a wide variety of data and statistical topics.
