Benton Institute for Broadband & Society

Wednesday, December 13, 2023

Digital Beat

Emerging Best Practices for Developing Effective, Measurable State Digital Equity Metrics

Ziggy Rivkin-Fish

An extraordinary, first-ever, nationwide effort in digital equity and opportunity is currently underway. Thanks to funding provided under the Infrastructure Investment and Jobs Act (IIJA), all 50 states and six territories are in the final stages of developing first-of-their-kind statewide digital equity and digital opportunity plans (Plans). Only a year ago, not a single state or territory had developed a comprehensive statewide Plan of this scale focused on the full spectrum of internet adoption issues.

As we approach the end of the year, and with many plans now made available for public review and comment, I step back here to consider some of what has been developed and achieved—even before the Plans are provided to the overseeing federal entity, the National Telecommunications and Information Administration (NTIA) of the U.S. Department of Commerce.

I consider the critical issue of metrics—what goals have the states developed, and how do they propose to measure current status and then track progress toward those goals?

A review suggests that a range of important best practices for rigorous data collection and measurement are emerging. While every state has adopted slightly different approaches to their Plans, there is a high degree of overlap in the types of metrics adopted by each. This is perhaps not surprising. The states all seek to measure variations on the same issues identified by NTIA for the same demographic groups (known as “Covered Populations”[i]); they all are responding to NTIA’s large, comprehensive framework for measuring digital opportunities; and they all seek, quite reasonably, to establish metrics they will be able to measure over time – even in the absence of dedicated funding.

The states’ baselines and targets vary across their measurable objectives—because each state obviously has collected its own, original data on its current environment, and each state has established its own goals for the future. The similarities seen across the Plans reflect the emergence of best practices among state broadband offices in designing and implementing data collection and analysis strategies that will align with NTIA’s requirements and produce effective measurements over time.

NTIA Adopted a Unified Framework for Data Collection

First, a bit of background. Relying on the statutory language, NTIA last year released a Notice of Funding Opportunity and subsequent guidance that established a comprehensive framework for measuring digital equity across a range of defined “Covered Populations.” NTIA required that each state evaluate the Covered Populations across five types of digital capabilities. The associated measurable objectives—for each Covered Population and with respect to the five types of capabilities—constitute a mandatory and critical element of each state’s Plan and of NTIA’s review and approval of the Plans.
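
To make the scale of that requirement concrete, the short sketch below (in Python) enumerates the population-by-capability matrix each state must cover. The Covered Populations follow NTIA's definitions (see the note at the end of this piece); the capability labels paraphrase the statutory categories and should be treated as illustrative rather than official wording.

```python
# A minimal sketch of the matrix NTIA's framework implies: each Covered
# Population is evaluated against each of five digital capability areas,
# and each cell needs at least one measurable objective. Capability labels
# below paraphrase the statutory categories; wording is illustrative.
from itertools import product

COVERED_POPULATIONS = [
    "low-income households", "aging individuals", "incarcerated individuals",
    "veterans", "individuals with disabilities",
    "individuals with a language barrier", "racial or ethnic minorities",
    "rural residents",
]

CAPABILITY_AREAS = [
    "broadband availability and affordability",
    "online accessibility and inclusivity of public resources and services",
    "digital literacy",
    "awareness of online privacy and cybersecurity measures",
    "availability and affordability of devices and technical support",
]

# One entry per population-capability pair, to be filled with metrics,
# baselines, and targets as a state's Plan takes shape.
objectives = {pair: [] for pair in product(COVERED_POPULATIONS, CAPABILITY_AREAS)}
print(f"{len(objectives)} population-capability cells to cover")  # prints 40
```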

Covered Populations Face Similar Barriers Regardless of the State in Which They Reside

One key insight gleaned from a review of the many state Plans released for public comment is that, at a basic level, the Plans have substantial similarities because Covered Populations face many of the same barriers in every state, even though the details of those barriers at the local level can be unique (for example, lack of nearby facilities for digital skills classes, or no public transportation to libraries). As a result, different Plans frequently reach the same qualitative conclusions, and state-level metrics frequently attempt to measure the same things.

As a qualitative example, the states of Kansas, Louisiana, and Utah all conclude in their unique Plans that barriers faced by older adults in device and internet use include declining physical and cognitive abilities, and a difficulty in keeping up with the rate of technological change. It should come as no surprise that these conclusions are reached by each of these states, independently, given that the challenges faced by older adults (and other Covered Populations) in different states will inevitably have more commonalities than differences. Each state’s data-based analysis is unique, but each state has identified similar health implications related to aging. Many (if not most) states’ Plans will similarly identify these universal challenges for their aging populations.

Second, in addition to similarities in a single Covered Population across states, there also exist important similarities across different Covered Populations in each state. For example, compared to younger residents, a larger portion of seniors live with disabilities; and a very large portion of seniors with disabilities live on low, fixed incomes. These trends exist nationwide. As a result, it should be expected that seemingly diverse measures of the current state of broadband adoption and use for these populations in any given state are not so different from each other.

A third key insight gleaned from the review is that most metrics for broadband adoption and use are largely a function of income distributions within groups—even when attempting to understand their relationship to other demographic factors such as race or veteran status. Many barriers either directly or indirectly derive from income or can be addressed with adequate funds.

For example, individuals with disabilities may struggle more to adopt broadband, face barriers in accessing on-premises resources, and lack access to appropriate digital assistive devices. Yet the available metrics will capture only what portion of individuals living with disabilities lack an income adequate to afford accessible transportation and assistive devices. In an ideal world, quantitative metrics would capture more detailed barriers, such as the portion of individuals living with disabilities who report struggles in accessing on-premises resources; in practice, it would be virtually impossible to capture those specific barriers quantitatively or at scale.

As a result, the states appear to have focused on metrics aligned with macro-impact barriers for which measurable data are available, rather than trying to generate data for micro-impact barriers, which would offer the states limited policy options and lack the granularity local communities need to act on the data.

Each State has Customized its Measurable Objectives

Consistent with NTIA’s requirements, each state developed metrics, baseline values, and target values that reflect its unique needs and gaps. The customization—backed by comprehensive stakeholder outreach efforts conducted by each state—is especially evident in baselines and target values, reflecting the different geographical and socio-economic challenges of each state. In addition, some states had access to unique metrics from state agencies that measured a barrier faced by a particular Covered Population.

Review of the states’ draft Plans suggests a significant degree of similarity among the types of metrics adopted in cases where quantitative data are widely available—because the states are all measuring progress across the same Covered Populations, and because limited sources of data exist for each one.

For example, most states have easy access to data (from the Federal Communications Commission and their own Broadband Equity, Access, and Deployment (BEAD) Program planning efforts) on the availability of broadband infrastructure at a geographic level. But the states all face the same general challenge in mapping access and adoption among Covered Populations. This is how best practices emerge among experienced, highly capable state broadband offices working in parallel toward a shared goal.

Multiple states chose to analyze data at census tract and county levels, enabling them to use data from other sources on digital barriers and sociodemographic factors, such as Census data. The resulting similarity of metric types stands to reason, given that all the states are working toward the same measurement goals as required by NTIA.
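
As a hedged illustration of what tract-level analysis can look like, the sketch below queries the Census Bureau's ACS 5-year API for household internet-subscription data. The variable codes (table B28002) are my assumptions about the relevant ACS table and should be verified against the Census API documentation; the state code (20, Kansas) is just an example.

```python
# A minimal sketch: pull tract-level household internet-subscription counts
# from the Census Bureau's ACS 5-year API. Variable codes are assumed
# (table B28002, "Presence and Types of Internet Subscriptions in Household")
# and should be checked against api.census.gov before use.
import requests

url = (
    "https://api.census.gov/data/2022/acs/acs5"
    "?get=NAME,B28002_001E,B28002_013E"      # total households; no internet access
    "&for=tract:*&in=state:20%20county:*"    # 20 = Kansas, as an example
)
rows = requests.get(url, timeout=30).json()
header, data = rows[0], rows[1:]

for name, total, no_internet, *_ in data[:5]:  # first five tracts
    if not total or not no_internet:           # skip tracts with suppressed values
        continue
    total, no_internet = int(total), int(no_internet)
    if total:
        print(f"{name}: {no_internet / total:.1%} of households lack internet")
```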

But while the metric types bear similarities, the actual data baselines adopted by each state vary based on localized conditions—and the target values vary in line with the state’s unique goals (and, of course, with each state’s realistic assessment of potential future progress in light of the resources it may have to address its gaps).

For example, observe the following metrics and goals used to benchmark states’ progress in serving all locations with 100/20 Mbps broadband service. Each one uses the FCC’s National Broadband Map as its data source, but each has its own unique baseline, short-term goal, and long-term goal.

| State | Metric | Source | Baseline | Short-term goal | Long-term goal |
|-------|--------|--------|----------|-----------------|----------------|
| OR | Percentage of locations with access to 100/20 broadband | FCC | 89% | 95% | 98% |
| GA | Percentage of locations with access to 100/20 broadband (includes all Covered Populations) | FCC | 90% | 95% | 98% |
| NC | Percentage of locations with access to high-speed internet (100/20 Mbps) | FCC | 98% | 100% | — |
| MD | Percentage of locations with access to 100/20 broadband | FCC | 97% | 98% | 99% |
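
Baselines like those above can in principle be reproduced from the FCC's public availability data. The sketch below shows one way to compute such a figure with pandas; the file name and column names are assumptions based on the structure of the National Broadband Map's public downloads and should be verified against an actual export.

```python
# A minimal sketch: derive a "percent of locations served at 100/20 Mbps"
# baseline from an FCC National Broadband Map availability export. Column
# names are assumptions -- verify against the actual file before use.
import pandas as pd

df = pd.read_csv("bdc_fixed_broadband_availability.csv")  # hypothetical filename

# A location may appear once per provider/technology; treat it as "served"
# if ANY record for it advertises at least 100 Mbps down / 20 Mbps up.
served = (
    df.assign(meets_100_20=(df["max_advertised_download_speed"] >= 100)
                           & (df["max_advertised_upload_speed"] >= 20))
      .groupby("location_id")["meets_100_20"]
      .any()
)

print(f"Baseline: {served.mean():.1%} of locations have access to 100/20 service")
```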

Some states also choose to rely on state agency or library system sources to measure progress, but few such agencies track broadband-related metrics on a consistent basis, creating challenges for medium- and long-term measurement of progress.

As a result, one clear commonality and best practice among states is the use of American Community Survey data—a reliable, consistent, and long-term data set—to measure affordability and access to service and devices.

Some states also collect survey data themselves to supplement American Community Survey data. For example, the District of Columbia and Kansas use the American Community Survey’s readily available data regarding device access. Maryland, however, found that access to devices is so high that tracking that data point doesn’t illuminate the challenges faced by Covered Populations. Rather, Maryland used its own scientific phone survey to capture data regarding how long it would take for different Covered Populations to replace a damaged or lost computer—a metric that Maryland found to be a more useful measure of the challenges faced by Covered Populations with respect to device use.

| State | Metric | Source | Baseline | Short-term goal | Long-term goal |
|-------|--------|--------|----------|-----------------|----------------|
| DC | Percentage device access for aging individuals | ACS | 89% | 85% | 95% |
| KS | Percentage of aging individuals with a broadband-enabled device | ACS | 73.1% | 78.5% | 94.7% |
| MD | Percentage of aging individuals that can get a broken or lost computing device fixed or replaced within a month | State survey | 93% | 93% | 95% |
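
For a state survey metric like Maryland's, the tabulation itself is straightforward once responses are weighted to be representative. The sketch below is a hypothetical example; the file, column names, and weighting scheme are all assumptions, and a real survey would apply its own design weights.

```python
# A minimal sketch: weighted estimate of the share of respondents age 60+
# who report they could get a broken or lost device fixed or replaced within
# a month. Column names are hypothetical; the response column is assumed to
# be coded 1 = yes, 0 = no, and "weight" to hold survey design weights.
import pandas as pd

survey = pd.read_csv("state_survey_responses.csv")  # hypothetical file

seniors = survey[survey["age"] >= 60]
weighted_share = (
    (seniors["replace_device_within_month"] * seniors["weight"]).sum()
    / seniors["weight"].sum()
)
print(f"Weighted estimate for aging individuals: {weighted_share:.1%}")
```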

States Sought to Develop Consistent and Sustainable Data Collection Frameworks for Measurable Objectives

This early review of the public comment versions of the Plans suggests that all the states faced the same obstacle: To be meaningful, measurable objectives need to be tracked over time, which means they need to exist within a sustained data collection framework. Amassing data for policy purposes is very expensive, so developing measurable objectives for a Digital Equity/Opportunity Plan requires the states to commit to long-term costs. The metrics are intended to be collected periodically, with a consistent methodology and consistent wording of questions.

And, of course, the resources for collecting the data, including funding for staff to collect and analyze it, need to be secured. This requirement means that states cannot rely on costly or one-off data sources, or on data that require resource-intensive outreach. As a result, data from other state agencies or community organizations must clear a very high bar to be included in a Plan’s data collection framework: The data must be repeatable, reliable, and inexpensive to gather, and the state must have transparency into the data collection methodology.

Given all this, most states chose to rely on existing federal data sets, supplemented with their own survey data as necessary. Quite reasonably, with limited future resources in the near term and no assurance of future federal or state funding for sustained data collection and analysis, states opted to rely on existing federal sources of data, such as the Census and the FCC, in developing their Plans.

Where economically feasible, many states also supplemented these sources with a scientific survey that could be repeated on a periodic basis at a relatively modest cost. A scientific survey fills some of the gaps in the federal data and gives states the ability to develop additional data on some Covered Populations. The scientific survey is also an effective instrument for adding granularity in the future to measure programmatic impacts, such as awareness of where in a local community respondents could access digital skills classes.

At the same time, some granularity is impossible to get at. For example, states struggled to find measurable, reliable data regarding the “incarcerated individuals” Covered Population. Data regarding internet use among incarcerated populations are not published or otherwise made available by most state or federal entities, even when such data are collected, for privacy and other reasons. And while states have attempted to collect data regarding formerly incarcerated individuals through surveys, this has proven largely unsuccessful, both because few survey respondents are willing to share their status as formerly incarcerated and because states are appropriately concerned about the privacy implications of collecting such data. Given the lack of data for this population, many states were frank in their draft Plans that they did not have access to baseline data for incarcerated individuals and did not anticipate being able to measure changes for that population over time.

State Broadband Offices have Limited Ability to Direct Data Collection by Other State Agencies

The limited range of data types used by many states also reflects that, by and large, the state broadband offices operate within the same type of institutional framework: They are departments and agencies within state government that are pursuing important goals, supported by their governors’ offices, but they must act within the confines of their authority. As a result, while the Plans all identify data partners across the state governments, the state broadband offices have limited ability to direct resources.

In some cases, states may have access to existing data already collected by a statewide agency or organization that serves Covered Populations. For example, if a state library system already surveys its member libraries annually on programs such as digital skills classes and device lending, this would be an obvious metric for the state broadband office to incorporate into its Plan, with a reasonable level of confidence that it will be available in future years. One instance of this can be found in Wyoming, which taps the state’s library system to report measurable objective metrics on participation in device lending programs.

At the same time, however, state broadband offices can’t direct the content of the library’s data efforts and don’t have authority to require that libraries also collect information about their clients’ demographic and socioeconomic information so Covered Populations receiving the services could be identified. Many library systems are concerned about collecting (and sharing) such data given their longstanding commitments to privacy.

This issue also represents a challenge with respect to incarcerated individuals. There are no federal data on broadband use among incarcerated individuals, for obvious reasons: Researchers cannot survey prison populations, let alone distinguish between federal and non-federal incarcerated people. Further, even if the state agency responsible for overseeing prisons collects relevant data, many such agencies have proven unwilling to share it, for privacy and other reasons, including, potentially, that their portfolio of responsibilities does not align with that of the state broadband office or with the goals of NTIA’s digital equity efforts.

In sum, state broadband offices have generally all faced the challenge that other agencies have their own missions and resources, making it difficult to rely on them for data that will enable tracking of a particular metric in the state’s Plan. Other agencies may or may not be willing or resourced to collect data on a recurring basis, with the same methodology, and with a robust sampling of the same population each time.

The Best-Practice Approach to Plan Metrics may Help Local Communities with their Benchmarking

Finally, state efforts to define metrics and collect data will benefit local communities. Local governments and nonprofits can utilize Plan metrics as a starting point and benchmark for studying specific Covered Populations or the community at large. The baselines and statewide objectives are informative measures against which to compare a given local community. A local survey and federal data at a local or regional level can be used for direct comparisons.
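
As a hedged sketch of what such a comparison could look like, the example below tests whether a community's locally surveyed adoption rate differs from a statewide baseline using a two-proportion z-test (via statsmodels). All counts are invented for illustration and do not come from any Plan.

```python
# A minimal sketch: benchmark a local survey result against a statewide
# baseline with a two-proportion z-test. All numbers are illustrative.
from statsmodels.stats.proportion import proportions_ztest

adopted = [312, 8950]   # households with broadband: [local survey, statewide]
sampled = [450, 12000]  # sample sizes: [local survey, statewide]

stat, pvalue = proportions_ztest(count=adopted, nobs=sampled)
local_rate, state_rate = adopted[0] / sampled[0], adopted[1] / sampled[1]
print(f"Local: {local_rate:.1%} vs. state: {state_rate:.1%} (p = {pvalue:.3f})")
```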

In addition, these metrics can be a starting point for examining barriers that are more local in nature: Localities with high elderly or disabled populations could examine specific community barriers such as transportation to digital skill classes or resources for device lending.

The data frameworks should be flexible enough to be extended in ways that preserve the underlying principles: The data must be readily available at statistically valid sample sizes, and rigorously and repeatably collected. If and when statewide institutions such as state agencies or community organizations are able to meet these requirements, their data can be added to the framework.

Conclusion

As the few remaining states submit their draft plans for public comment and NTIA curing, and the submitted plans complete their curing phase, we should expect further convergence in the data collection frameworks adopted. The need to ensure that data collection is sustainable in the long term will lead states to pick up best practices from each other. At the same time, we should expect not just baselines but also mid- and long-term measurable targets for digital access components to further diverge as they are updated and refined over time, because the future Digital Equity Capacity Grants and available state funding and resources will vary from state to state, and targets will need to be adapted to the resources available. We should also see interesting divergences in program metrics as Capacity Grant funding becomes available and states decide which areas of digital equity they want to focus on and what they will require of their grant recipients to measure effectiveness and impact.

Ultimately, two state-level data collection frameworks will emerge as a result of the Infrastructure Investment and Jobs Act: the State Digital Equity Plans’ measurable objectives, and the metrics that the future Digital Equity Capacity Grants will generate to track the performance and impact of the funding programs developed by the states. It remains to be seen what impact these data frameworks will have on local community planning, but data collection is difficult, and the need to compare against some kind of benchmark, whether the state or other similar communities, will likely lead to some creative borrowing and convergence on measures tracking digital equity and participation.

Notes:

[i] Covered Populations are: households with annual incomes no more than 150 percent of the federal poverty level; people age 60 or above; incarcerated individuals; veterans; individuals with disabilities; individuals with a language barrier; members of a racial or ethnic minority group; and individuals who live in rural areas.


Ziggy Rivkin-Fish is Vice President for Broadband Strategy at CTC Technology & Energy. He is an analyst who specializes in project management and process planning. Rivkin-Fish has advised local governments, large consortia, and other public sector clients regarding the governance issues raised by inter- and intra-jurisdictional communications projects and networks. Rivkin-Fish is CTC’s lead data analyst and strategist advising states on digital equity plans.

 

The Benton Institute for Broadband & Society is a non-profit organization dedicated to ensuring that all people in the U.S. have access to competitive, High-Performance Broadband regardless of where they live or who they are. We believe communication policy - rooted in the values of access, equity, and diversity - has the power to deliver new opportunities and strengthen communities.


© Benton Institute for Broadband & Society 2023. Redistribution of this email publication - both internally and externally - is encouraged if it includes this copyright statement.


For subscribe/unsubscribe info, please email headlinesATbentonDOTorg

Kevin Taglang
Executive Editor, Communications-related Headlines
Benton Institute
for Broadband & Society
1041 Ridge Rd, Unit 214
Wilmette, IL 60091
847-220-4531
headlines AT benton DOT org


Broadband Delivers Opportunities and Strengthens Communities