The health workforce plays an important role in the adoption of electronic health records (EHRs). Hospitals have cited barriers around hiring a competent workforce to adopt EHRs. The literature does not adequately relate organizational and health workforce competency to EHR adoption, which makes it difficult to monitor and evaluate programs aimed at improving this problem. In this study, we develop an index measuring hospitals’ competency in adopting electronic health records (EHRs) using Item Response Theory. We test to what extent hospitals’ skill mix and high tech capacity influence their competency. We use health IT data from the Health Information and Management Systems Society (HIMSS) Analytics Database and workforce and high tech data from the Centers for Medicare and Medicaid Services (CMS) Provider of Services file. We found that hospitals with a larger share of registered nurses (RNs) had higher EHR competency, but environments with more high tech potentially compete for their attention and result in lower EHR competency. Technicians, therapists, and lower skilled nurses who interact with high tech appear to transfer their knowledge and skills into higher EHR competency. Future EHR adoption incentives should target lower competency hospitals with insufficient workforce and less technological capacity.
Keywords: Health IT; Capital-skill complementarity; Item response theory; Organizational competency
The Health Information Technology for Economic and Clinical Health (HITECH) Act under the American Recovery and Reinvestment Act of 2009 (ARRA) committed $17 billion in incentives for hospitals to adopt meaningful use of certified electronic health records (EHRs) by 2015. The goal was to significantly increase the pace at which hospitals adopted EHRs and to boost hospitals’ budgets to the IT investment levels of other industries. On average, health care organizations have devoted only 2 to 3% of their budget to IT while other industries have averaged 10 to 15% [1]. As a result, at the time of HITECH’s passing, only 1.5% of hospitals reported having a comprehensive EHR system; less than 10% of US hospitals had an EHR that met the basic requirements of the federal standards [2].
Studies on EHRs in hospitals focus on current capacity, mostly defined as the inventory of EHR-related technologies and their application [3]. As far as the authors are aware, no papers have attempted to quantify the organizational competency of a hospital when adopting EHRs. Even though HITECH put in place incentives for EHR adoption, the literature provides little guidance in identifying which hospitals need help in EHR adoption. The literature also does not provide clarity on how organizational factors such as workforce and technology influence a hospital’s EHR competency.
This study examines a two-part question. First, how do we measure hospital competency in EHR adoption? Second, how does the health workforce relate to hospital competency in EHR adoption? To address these questions, this study adapts a methodology called Item Response Theory (IRT) to identify hospitals that have low EHR competency, and then examines the role that prior experience with technology and labor skill mix play in that competency. Our results will provide policymakers a foundation for identifying the hospitals most in need of targeted incentives to improve their workforce in order to move up the EHR adoption curve.
Current metric monitoring EHR adoption
The Electronic Medical Record Adoption Model (EMRAM) is the most popular metric in the literature to identify and monitor a hospital’s current EHR capacity. The EMRAM groups hospitals into eight discrete stages based on a self-report survey of their health IT inventory. Health IT refers to a wide variety of tools including computerized physician order entry systems for tests and medications, electronic prescriptions, and decision support systems. EHRs package these tools and link with EMRs. Although the HIMSS model is titled with EMR, the EMRAM essentially captures the extent to which a hospital reaches the point of an EHR (henceforth we will use EHR as the reference to health IT capability in hospitals).
The stages of the EMRAM were constructed by expert opinion of researchers at HIMSS Analytics. Although the EMRAM is widely used in the peer-reviewed literature, it has not been validated there. The grouping of the tools into stages does not have clear theoretical underpinnings. The EMRAM stage is constructed from a revealed preferences survey in which we observe a hospital’s EHR capacity. The survey is not a stated preferences survey or contingent valuation model whereby one would ask hospitals the value they place on having EHRs; the EMRAM does not identify a hospital’s desires or future plans to adopt EHRs. The EMRAM was not explicitly designed to capture organizational competency; it was designed to capture and then monitor the status of hospital EHR adoption. One concern is that the EMRAM may be subject to potential misreporting of health IT capability due to, for example, misinterpretation of how the survey defines (or does not define) a tool.
An alternative metric using item response theory
The measures that make up the EMRAM could serve as proxies for competency given that the pattern in which a hospital adopts the health IT tools reveals their abilities. In other words, hospitals are not likely to adopt a clinical decision support system, for example, without training of the staff and basic ancillary IT equipment in place. To capitalize on the available health IT capabilities data, we apply a commonly used methodology within the education literature - Item Response Theory (IRT) – to create a continuous index of organizational competency. We propose this index as a better and more meaningful measure of competency over the EMRAM.
Historically, IRT has been used to evaluate the design of educational tests and interpret scores on these exams (e.g., SAT or ACT) (see [4] for a concise discussion of IRT and its application). IRT seeks to identify the latent ability of an individual based on their responses to a series of questions and how others also respond to those questions. IRT also has been interpreted as measuring the competency of individuals, which is the terminology we will use henceforth. IRT serves as an alternative to “classic testing methods,” which do not take into account the competency of an individual when answering a question. As far as we know, IRT has not been applied within the health economics or health services literature to measure an organization’s latent ability, or organizational competency.
IRT has its roots in factor analysis by estimating the distance between responses to a series of questions. IRT allows us to assume that a set of latent traits of the actual technology can explain a hospital’s competency to adopt EHRs. Specifically, we assume that the hospital’s probability of adopting an EHR is a function of two factors: 1) its competency to take up the EHR and 2) the characteristics (as identified by the questions or items in a survey) of the EHR.
Empirical application of item response theory
IRT assumes that the relationship between adoption and competency can be described by a monotonically increasing function called the item characteristic curve, an S-shaped function that asymptotes at the values of zero and one. IRT may be estimated using a one, two, or three parameter model when using dichotomous responses to survey questions (or items). The one parameter model involves only a difficulty parameter (also referred to as the item location parameter, or horizontal shift of the curve); this model equates ability with the difficulty of a test item. If an item curve is shifted to the right, the item is more difficult to answer. The two parameter model adds a discrimination parameter (the slope of the curve). Of two items with the same inflection point, the item with the steeper slope is more difficult for individuals with low ability, while the item with the flatter slope is more difficult for those with higher competency. The three parameter model takes into account a guessing factor (or vertical shift of the curve).
We chose a two parameter model to construct hospitals’ competency in EHR adoption. We assume that hospitals were not guessing the answers to the survey items (although we also do not make any assumptions about the potential misinterpretation of the survey items). We want the flexibility of a two parameter model to allow hospitals of different competencies to rate the difficulty of the items differently.
Under IRT, we assume uni-dimensionality of the data and local independence. Uni-dimensionality means that the set of questions only identifies one underlying competency. One could argue that several competencies may lead to health IT adoption, but IRT requires the assumption that a dominant trait exists that drives adoption. We test this assumption and discuss the results later in the paper. Local independence means that after taking competencies into account, no other relationship exists between the responses of the items (technologies) used to create the competency index.
Invariance of both item parameters and competency parameters is an important aspect of IRT. Invariance means that the item parameters do not depend on the underlying competency distribution of the hospitals; similarly, the competency parameter does not depend on the set of items. IRT assumes that if an individual takes a test multiple times, their true competency will be eventually revealed. Also, the items tested on different samples from the same population are assumed to have the same distribution of competency. Again, we test this assumption and discuss the results later in the paper. A key advantage of this assumption is that the resulting competency measure remains consistent even if the survey questions or samples change.
We do not assume the parameters in IRT are invariant over time given that learning over time could impact competency. This variance limits our interpretation of a hospital’s organizational competency to the specific time period in which the EHR data are collected. We argue, however, that our proposed organizational competency index provides a measure of a hospital’s competency relative to others and that this relative position is assumed to not change drastically over time, which our results support to a limited extent. We return to the estimation procedure of IRT along with the findings from our robustness checks below.
Capital-skill complementarity
We propose that our index resulting from IRT is a measure of organizational competency, but the index does not provide insight as to how a hospital achieves higher or lower competency levels. Organizational competency reflects a bundle of skills and technologies; as such, we hypothesize that higher organizational competency may be associated with higher skills and more or better technologies in place. This capital-skill complementarity, or the complementary need for a skilled workforce in order to adopt or implement a technology [5], is a common concept in labor economics. Only one study, on California hospitals, has tested the relationship between EHR adoption and skilled labor; it found that hospitals in mid-EMRAM stages required more hours from registered nurses, with a substitutive effect on lower skilled nurses [6]. To date, no studies have attempted to capture the complex dynamic between EHRs, other technologies, and skill mix on a national sample; we test whether an environment rich in skilled workers and advanced technologies influences a hospital’s competency to adopt an EHR. In the next section, we describe our data and empirical approach to model this relationship.
Data sources
For this analysis, we examined a sample of 2,274 acute care hospitals (or 45% of all acute care hospitals in the US). We used the 2011 HIMSS Analytics Database, which surveys the health IT capabilities of acute care hospitals in the US in 2010. We merged in the 1980 to 2010 Provider of Services (POS) data, an annual survey with details on workforce and services. The file derives from the Online Survey, Certification, and Reporting (OSCAR) system that tracks the quality of all providers receiving payment from the Centers for Medicare and Medicaid Services (CMS) [7]. Annually, states must survey 5% (or at least one, whichever is greater) of a representative sample of accredited, deemed, and critical access hospitals, plus additional hospitals performing poorly (see [8] for details on the sampling procedure). Hospitals were re-surveyed if a large number of complaints were filed with CMS. The POS is essentially a repeat cross sectional panel without replacement. Over 50% of our acute care hospital sample was surveyed from 2001 to 2010, a third from 1991 to 2000, and the remainder from 1981 to 1990.
Dependent variables
We constructed the EMRAM using the binary answers on 28 different technologies (Appendix A for a detailed list of technologies) using the 2011 definition of the EMRAM, which outlines eight stages of adoption (Appendix B for the definition of the EMRAM stages) [8]. Due to the small sample of hospitals in the higher stages, we collapsed stage 4 through 7 into one category. Hospitals are thus categorized into five different stages of adoption, {0,1,2,3,4}, where a higher number represents a higher level of technology adoption. Following [6], we assumed that adoption begins, on average, one year after the contract date. When this information was missing and the application’s status was “Automated/Live and Operational,” we assumed that the technology was implemented.
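As a concrete illustration of this collapsing step, the short Python sketch below caps the reported EMRAM stage at 4; the variable and column names are hypothetical, not fields from the HIMSS file.

```python
import pandas as pd

# Hypothetical raw 2011 EMRAM stages (0-7) for a handful of hospitals.
hospitals = pd.DataFrame({"raw_stage": [0, 1, 2, 3, 4, 5, 6, 7]})

# Stages 4 through 7 are sparsely populated, so collapse them into "4".
hospitals["emram_stage"] = hospitals["raw_stage"].clip(upper=4)

print(hospitals["emram_stage"].tolist())  # [0, 1, 2, 3, 4, 4, 4, 4]
```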
The exact categorization of the 28 technologies into stages was left to our discretion given that the methodology to create the stages is not transparent in the HIMSS documentation. We saw potential discrepancies in how experts may categorize the information technologies as well as how IT managers may interpret the question about the presence of said technology when answering the survey. This potential discrepancy puts the EMRAM stages at risk for misreporting.
In order to construct an index measuring a hospital’s EHR competency, we start with the same set of 28 technologies used in the EMRAM. We retained hospitals even if they reported having all (or none) of the technologies. We applied IRT (described in the Empirical Estimation section) to these items to create a continuous variable that takes on values from negative to positive infinity. Each hospital is assigned a competency value based on its responses to the 28 technologies.
Key independent variables
To test whether other technology and workforce factors are determinants of organizational EHR competency, we constructed a technology index and a workforce education index; we also looked at the proportion of various occupations within the hospital as an alternative to the education index. Using POS data, we constructed a technology index, which is a count of 30 high tech services (an additional 35 services were considered not high tech based on expert opinion; see Appendix C for the list of included and excluded services). Hospitals were categorized as offering the high tech if they provided the service through their own staff or through a combination of staff and affiliated partners. Hospitals that provided the service through affiliation only were not considered as having the technology. Three technologies (nuclear medicine, diagnostic radiology, and clinical laboratory) were excluded from the high tech index given that nearly every hospital had them throughout all thirty years of the POS data.
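The counting rule can be sketched as follows; the service names and coding categories are illustrative stand-ins for the 30 POS high tech fields, not the actual variable names.

```python
import pandas as pd

HIGH_TECH = ["mri", "ct_scanner", "cardiac_surgery"]  # 30 services in the study

# Hypothetical POS-style coding of how each service is provided.
df = pd.DataFrame({
    "mri":             ["staff", "affiliate", "combination"],
    "ct_scanner":      ["none",  "staff",     "staff"],
    "cardiac_surgery": ["staff", "none",      "affiliate"],
})

# A hospital counts as offering the technology only if its own staff are
# involved (alone or in combination); affiliation-only does not count.
offered = df[HIGH_TECH].isin(["staff", "combination"])
df["high_tech_index"] = offered.sum(axis=1)
print(df["high_tech_index"].tolist())  # [2, 1, 2]
```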
We constructed two alternative sets of measures of workforce skill mix. The first set is the proportion of the entire hospital workforce (full time equivalents) in each major occupational category. We identified five major occupational categories: 1) nurses, 2) physician assistants, 3) technicians, 4) therapists, and 5) other personnel. Nurses included nurse practitioners (NPs), registered nurses (RNs), certified registered nurse anesthetists, and licensed practical/vocational nurses (LP/VNs). Technicians included nuclear medical, diagnostic radiology, and medical laboratory technicians. Therapists included occupational, physical, and respiratory therapists, dieticians, psychologists, speech pathologists/audiologists, and medical social workers. We had an unspecified “other” category of employees; the category is assumed to be mostly administrative in nature when considering the distribution of clinical versus non-clinical staff as calculated by the authors using occupational employment statistics reported by the Bureau of Labor Statistics (BLS). We excluded registered pharmacists, residents, and physicians given that complex contracting relationships, especially for physicians, result in inconsistent and unreliable reporting of these occupations. The POS does not separate out occupational categories that directly deal with EHRs; HIMSS Analytics provides IT related occupation data, but not for as many years as the POS provides.
The second labor measure indexes the average education level of the workforce, where a larger value indicates higher levels of human capital or, alternatively, reflects the sophistication of the workforce. We approximated the total required years of schooling using the occupational descriptions provided by the BLS [9]. We multiplied the years of schooling by the proportion of workers in each occupational category defined above (which serves as a type of educational weight), and then averaged over the total number of workers to account for the varying sizes of hospitals. Since we were not able to identify the exact occupations among the “other” employees, we assumed their education to be equal to the average of all other workers listed within the health care field according to the BLS, excluding the identified occupations listed above. Given that the POS data are firm level rather than individual level data, we did not have information about the years of experience of each worker in a hospital, which is a typical measure used in conjunction with years of education in a human capital calculation. Thus our approximation of human capital is conservative and serves as a lower bound. We also included a second order polynomial in the education index to allow for nonlinearity in the data.
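A minimal sketch of this weighting, assuming placeholder schooling years rather than the BLS values used in the study:

```python
import pandas as pd

# Placeholder required years of schooling by occupational category
# (illustrative values, not the BLS figures used in the analysis).
YEARS_OF_SCHOOLING = pd.Series({"rn": 14, "lpvn": 13, "np": 18, "pa": 18,
                                "technician": 14, "therapist": 16, "other": 15})

# Hypothetical full time equivalents for one hospital.
fte = pd.Series({"rn": 300, "lpvn": 50, "np": 5, "pa": 5,
                 "technician": 20, "therapist": 40, "other": 580})

# Share-weighted average education, which adjusts for hospital size.
shares = fte / fte.sum()
avg_educ = (shares * YEARS_OF_SCHOOLING).sum()
print(f"Average education index: {avg_educ:.2f} years")
```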
To investigate the potential substitutive or complementary effect between technology and workforce, we included interaction terms between the high tech index and the workforce variables. We tested interaction terms between the high tech index and time to capture advances in technology over time. We did not test interaction terms between skill mix and time due to lack of variation for some interactions, and we also do not have a strong prior assumption about the change in skill mix over time.
Other independent variables
Prior work found that hospital system membership, payer mix, and hospital scale are significantly related to the presence of EHRs in a hospital, but that strategic behavior, hospital competition, and ownership had little or no effect [10]. One cross sectional study, also using HIMSS Analytics data, found that system membership increased the likelihood of adoption, but only for small hospitals and not for medium and large hospitals [11]. A similar cross sectional study confirmed the relationship with system membership, but also found that larger and urban hospitals were more likely to adopt EHRs; payer mix did not have an effect [12]. Given these findings, we included variables that might influence a hospital’s EHR competency: total number of beds, for-profit status (versus not-for-profit, government owned, and government affiliated), medical school affiliation, and urban (versus rural, based on whether a hospital was in a metropolitan statistical area). We were not able to capture system affiliation given the limitations of the data.
We were not able to add a measure of competition, but previous studies did not show that competition was a significant predictor of EHR stage; competition is also unlikely to be a predictor of competency since we define competency as an internal trait. Similarly, while concentration in the health insurance market may influence the rate of technology adoption, it is unlikely to influence competency. We do not have data on the patient mix, which may influence the service offerings of the hospital; however, we assume that experience with patients is not likely to translate directly into competency with EHRs (and if it does, the effect is more likely to be second order).
Study design
Our study is a longitudinal-type analysis. The outcome variables described above are the hospital’s EMRAM stage and organizational EHR competency in 2010. These are estimated as a function of the most recently available POS data on the hospital’s workforce and other technology capabilities. Given the structure of the POS, the “distance” between the year of the hospital’s EMRAM stage and organizational EHR competency (2010) and the year of the available workforce and technology data can range between zero and thirty years. We control for this “distance” by adding the number of years as an independent variable in the model. Although the sample is not evenly distributed throughout the thirty-year period, the bias is towards the more recent years; given the reporting requirements of the POS, hospitals surveyed in the earlier years generally performed better than hospitals surveyed more recently, although the bias is not strong.
Estimating IRT
The first step in our empirical estimation was to create the index of organizational EHR competency. Let $\theta$ be the set of competencies that influence adoption (also called the ability parameter). $P_i(\theta)$ is the probability that a hospital with competency $\theta$ has adopted technology $i$ such that:

$$P_i(\theta) = \frac{\exp\left(D a_i (\theta - b_i)\right)}{1 + \exp\left(D a_i (\theta - b_i)\right)}, \qquad i = 1, 2, \ldots, n,$$

where $n$ is the number of items (or technologies) measured in the test (or survey), $D$ is the scaling parameter (an arbitrary constant set at 1.7), and $a_i$ and $b_i$ are the item parameters. Specifically, $b_i$ is the difficulty parameter for technology $i$, and $a_i$ is the discrimination parameter for technology $i$.
We estimated the index using the number of “correct” answers (or yeses) to each of the 28 technologies used for the EMRAM stage variable (no other control variables were used) as well as the difficulty of the item. The difficulty parameter was estimated for each technology and represents the point on the competency scale where the probability of adoption is 50% (or the probability of answering the question “right” or “yes” is 50%). A higher value of $b_i$ means that the hospital requires greater competency to adopt the technology. The discrimination parameter is proportional to the slope of the curve at the point $b_i$ on the competency scale. Technologies with steeper slopes are more useful for separating hospitals into competency levels.
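To make the roles of the two item parameters concrete, the following sketch evaluates the item characteristic curve defined above for illustrative (not estimated) parameter values.

```python
import numpy as np

D = 1.7  # scaling constant from the equation above

def icc(theta, a, b):
    """Probability that a hospital with competency theta adopts an item
    with discrimination a and difficulty b (the 2PL curve above)."""
    return 1.0 / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.linspace(-3, 3, 7)

# At theta = b, the adoption probability is exactly 0.5.
print(icc(np.array([0.5]), a=1.0, b=0.5))        # [0.5]

# A higher-discrimination item separates hospitals around b more sharply.
print(icc(theta, a=0.5, b=0.0).round(2))
print(icc(theta, a=2.0, b=0.0).round(2))
```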
The parameters are jointly estimated using maximum likelihood. In order to identify the parameters, we assumed that competency is normally distributed with mean 0 and standard deviation 1. We estimated organizational EHR competency using IRT (the openirt routine, downloadable for Stata SE 11). We then assigned a competency value to each hospital; a higher value means higher competency in EHR adoption.
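Our estimation relied on the openirt routine; purely as a hedged illustration of the scoring step, the sketch below assigns a competency value to one hypothetical hospital by maximizing the 2PL likelihood over $\theta$ with the item parameters treated as known (openirt estimates items and abilities jointly, so this is a simplification).

```python
import numpy as np
from scipy.optimize import minimize_scalar

D = 1.7

def neg_loglik(theta, responses, a, b):
    # 2PL log-likelihood of a 0/1 adoption vector, negated for minimization.
    p = 1.0 / (1.0 + np.exp(-D * a * (theta - b)))
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Illustrative parameters for 5 of the 28 technologies (not our estimates).
a = np.array([0.8, 1.2, 1.0, 1.5, 0.9])    # discrimination
b = np.array([-1.0, -0.5, 0.0, 0.8, 1.5])  # difficulty
responses = np.array([1, 1, 1, 0, 0])      # adopted the three easiest items

result = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded",
                         args=(responses, a, b))
print(f"Estimated competency: {result.x:.2f}")
```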
Estimating EMRAM using generalized ordered probit
The second step of our study was to estimate the influence of organizational factors on: 1) EMRAM stage and 2) EHR competency. The EMRAM stage model has a discrete outcome with a specified order of stages; each stage is intended to be mutually exclusive. Using generalized ordered probit (goprobit), we predicted the EMRAM stage in which a hospital resided as a function of the high tech index, the alternative workforce skill mix variables, the various interaction terms, other hospital characteristics, and year of the POS survey. The marginal effects from the goprobit model tell us the increase or decrease in the probability of a hospital residing within a specific stage of the EMRAM; the coefficient output does not tell us which stage the hospital is more likely to be in. This statistical procedure also theoretically restricts the findings such that the marginal effects must go from negative to positive, or positive to negative, only once as one looks at the results from the lowest to highest stages; bell shaped results (e.g., lower stages positive, middle stages negative, and higher stages positive) are not possible outcomes of goprobit.
We argue (and tested) that the stages violate the parallel regression assumption, which requires that the β coefficients above are equal across stages. Heuristically, it is logical that the transition of hospitals from Stage 0 to Stage 1 is not the same as the transition from Stage 1 to Stage 2 or from Stage 2 to Stage 3, as the technologies required to be categorized in a higher stage are harder to adopt than those required for a lower stage. The Brant test, which tests for equal coefficients, failed in each of our model specifications, which was not surprising given our earlier discussion. A clear alternative when the Brant test fails is not prescribed, so we estimated a generalized ordered probit model. We estimated coefficients for each of the five stages of adoption and report the marginal effects.
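The goprobit routine in Stata estimates all thresholds jointly; as a rough sketch of the underlying idea, the code below fits a separate binary probit of Pr(stage > j) for each threshold, which illustrates how the generalized model lets every threshold carry its own coefficient vector. All data here are simulated placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({"high_tech": rng.poisson(11, n).astype(float),
                  "beds": rng.normal(243, 194, n)})
stage = rng.integers(0, 5, n)  # placeholder EMRAM stages 0-4

exog = sm.add_constant(X)
for j in range(4):  # the four thresholds between the five stages
    y = (stage > j).astype(int)
    fit = sm.Probit(y, exog).fit(disp=False)
    print(f"Pr(stage > {j}):", fit.params.round(3).to_dict())
```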
Estimating EHR competency using quantile regression
We next estimated the influence of organizational factors on our EHR competency index. We tested its association using ordinary least squares as well as quantile regression with the high tech index, the alternative measures of workforce, the various interaction terms, other hospital characteristics, and year of the POS survey. Quantile regression allows us to identify how the explanatory variables impact competency across the distribution. The coefficients can be easily interpreted as effects at each quantile, as opposed to OLS which tells us about effects only at the mean.
An advantage of using the EHR competency index over the EMRAM as the dependent variable is that we are able to take out the inherent endogeneity of using direct measures of EHR presence as a function of other technologies and workforce.
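A minimal sketch of the quantile regressions, with simulated stand-ins for the competency index and regressors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({"high_tech": rng.poisson(11, n).astype(float),
                  "beds": rng.normal(243, 194, n)})
competency = 0.001 * X["beds"] + rng.normal(0, 1, n)  # toy outcome

exog = sm.add_constant(X)
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    fit = sm.QuantReg(competency, exog).fit(q=q)
    print(f"Q{int(q * 100)}:", fit.params.round(4).to_dict())
```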
Descriptive results
About a third of our acute care hospitals are located in urban areas and about a third of the sample is affiliated with a medical school; one fifth of the sample is for-profit. The average size of a hospital is 243 beds (median: 185.5 beds) with a wide dispersion (SD: 194 beds). The average value of the high tech index is 11.3 items (median: 9.0 items) with a relatively narrow dispersion (SD: 0.3 items). The average education level is 15.6 years (median: 15.7 years, SD: 0.2 years). One third of the employees in the hospital are nurses and/or PAs (Figure 1), over 80% of whom are RNs. About an equal share (0.2%) of all occupations are PAs and NPs. Therapists are a little under 5% of the sample, with the most common type being respiratory therapists followed by physical therapists. Technicians are 1.9% of all occupations, with radiology technicians slightly more common than the other two types. The majority of employees are unspecified other personnel, most of whom are likely non-clinical in nature.
Figure 1: Distribution of major occupational groups.
Source: Provider of Services file, 1980-2010
Source data available upon request. Note: Nurses include nurse practitioners, registered nurses, registered nurse anesthetists, licensed practical nurses, and licensed vocational nurses. Technicians include medical, radiology, and nuclear medicine technicians. Therapists include respiratory, physical and occupational therapists, medical social workers, dieticians, speech pathologists, audiologists, and psychologists. Other personnel is a category of unspecified occupations. Excluded are physicians, medical residents, and pharmacists.
A little over a quarter of the hospitals in our sample are in Stage 4 or above, and a little over a third of the hospitals are in Stage 2 of the EMRAM (Table 1). A surprising share of our sample (17.8%) essentially does not have any health IT. Our distribution is similar to Furukawa et al. (2010), although the construction of the EMRAM stages is highly dependent on how experts interpreted the responses to the health IT items in the HIMSS Analytics survey.
Stage | Description | % Hospitals (N=2,274)
---|---|---
4 or above | Physician Documentation installed, Order Entry, and Utilization Review (Case Mix Management or Data Warehousing/Mining or Outcomes and Quality Management) installed | 32.32%
3 | Clinical Decision Support System and Computerized Practitioner Order Entry installed | 9.98%
2 | Nursing/clinical documentation installed; either nursing documentation, patient tracking, acuity, or delivery installed | 41.78%
1 | Three ancillaries (laboratory, pharmacy, and radiology) installed; CDR installed | 7.70%
0 | Not all ancillaries installed or no CDR | 8.22%
Source: Health Information and Management Systems Society Analytics Database, 2011
Table 1: Frequency of Acute Care Hospitals by HIMSS EMR adoption model stages in 2010.
Hospitals in lower EMRAM stages are more likely to be for-profit and located in urban areas (Figure 2). Medical school affiliation increases with higher EMRAM stages. Average education consistently increases with higher EMRAM stages, although the differences are small. The high tech index also increases with higher EMRAM stages, with the exception of Stage 0 having more high tech than Stage 1. The same trend exists for the total number of beds.
Generally, hospitals in the highest EMRAM stages consistently have more educated staff than hospitals in lower EMRAM stages. Hospitals in the lower EMRAM stages tend to have more RNs, LP/VNs, and technicians. At the extreme ends of the EMRAM stages, the shares of NPs and PAs are the highest. Hospitals in the higher EMRAM stages tend to have more therapists, except in Stage 0.
We examined the bivariate trends of our key variables across five year bins in which a hospital reported data in POS. Even though POS reports data with up to a thirty-year lag, we see a nearly identical distribution of hospitals in each EMRAM stage across the five year bins (figure available upon request). The consistent distribution of EMRAM stages across the years assures us that hospitals with older data on workforce and high tech adoption are not considerably different in EHR capacity from hospitals with more recent data. Not surprisingly, hospitals in the earlier years of POS report less high tech compared to later years. The average education among nurses in hospitals increased over time. The likely reason for the increase in nursing education is that the skill mix of nurses tended either towards more educated nurses (e.g., advanced practice nurses) or fewer lower level nurses (e.g., LPN/LVN); the trend is more likely the former given the trends in nursing [13].
IRT and robustness checks
We turn back to the results of testing the robustness of our EHR competency index before presenting our regression results. First, we checked the uni-dimensionality of the data. As mentioned above, unidimensionality requires that there is a dominant ability that drives adoption. To demonstrate this assumption, we plotted the eigenvalues of the correlation matrix of the item responses. The plot clearly indicated a dominant factor (figure available upon request); the first eigenvalue was about three times bigger than the next biggest eigenvalue.
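This eigenvalue check can be reproduced in a few lines; the sketch below simulates binary responses driven by a single latent trait, so the first eigenvalue of the item correlation matrix dominates by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_hospitals, n_items = 1000, 28

theta = rng.normal(size=(n_hospitals, 1))       # one latent trait
b = np.linspace(-2, 2, n_items)                 # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta - b)))          # adoption probabilities
responses = (rng.random((n_hospitals, n_items)) < p).astype(float)

# Eigenvalues of the item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print(eigvals[:3])  # the first should dwarf the second
```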
Second, we tested the invariance of the item and ability parameters (Appendix D). When we split the sample randomly by hospital, by item difficulty, and randomly by item, we expected the ability parameter to fall on the 45 degree line as a signal of invariance. In other words, we did not expect ability to vary by the sample of hospitals, the choice of questions (items) asked, or the difficulty of the item. We saw this pattern for all three splits, which means that we have the property of invariance in our model; IRT was appropriate to use.
The distribution of hospitals around the difficulty parameter was approximately normally distributed (Appendix E). The distribution around the discrimination parameter was more right skewed (Appendix E); the right skew indicates that we have quite a few low ability hospitals that find some health IT tools particularly difficult to acquire. The distribution around the ability parameter was approximately normally distributed, which was expected.
Although the EMRAM was not explicitly developed to measure competency per se, researchers have attempted to identify the determinants of being in higher EMRAM stages. Also, we assume that despite its lack of robustness, the EMRAM stages do reflect some degree of increasing competency as hospitals move up in stages. As such, we checked the consistency of our EHR competency index distribution with the EMRAM stages. We found that hospitals in the lower EMRAM stage had a lower competency to adopt EHRs compared to hospitals in higher EMRAM stages; the variance in the parameter was relatively similar across stages (Appendix F). The competencies were significantly different from each other by EMRAM stage.
Regression results
A consistent finding across all the regression models is the significant influence of hospital size (measured by total number of beds) and for-profit status. Larger hospitals are more likely to be in the higher EMRAM stages and exhibit a positive dose-response relationship with EHR competency. For-profit hospitals are less likely to be in higher EMRAM stages and more likely to be in lower EMRAM stages; for-profit hospitals also tend to be less competent in EHR adoption at every part of the distribution, but particularly at both ends of the competency distribution (i.e., a U-shaped trend).
Most of the models show a significant relationship between our two dependent variables and our key variables of interest, high tech and workforce skill mix (Tables 2 and 3). Most of the models did not find a significant relationship with the average education of the overall set of employees or with the subset of nursing education. Only one of the EHR competency models shows a marginally significant and positive relationship between the average education of all other employees and competency at the lower end of the competency distribution (Table 3, Model B).
Model A (N=2274) | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Stage 4+
---|---|---|---|---|---
AvgEduc (#) | 0.3303 | 0.9452 | 5.1103 | -4.201 | -2.1849
AvgEduc (#), Squared | -0.0143 | -0.0322 | -0.1594 | 0.139 | 0.067
Technology Count (#) | -0.0515 | -0.0289 | 0.388** | 0.0821 | -0.3896**
Interact Technology Count (#) and: | | | | |
AvgEduc (#) | 0.0032 | 0.0016 | -0.0249** | -0.0052 | 0.0253**
Beds (#) | -0.0001** | -0.0002*** | 0.0001 | 0.0000 | 0.0001***
For-Profit | 0.0306* | 0.0737*** | 0.219*** | -0.0625*** | -0.2608***
Medical School Affiliation | -0.0039 | 0.0097 | -0.0694*** | 0.0061 | 0.0575**
Urban | 0.0027 | -0.0114 | 0.0652** | -0.007 | -0.0495**
Year Dummy | -0.001 | -0.0031** | 0.0006 | -0.0003 | 0.0038**
Model B (N=2266) | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Stage 4+
Avg Nurse Educ (#) | -0.216 | -0.0847 | 0.8024 | -0.3448 | -0.1568
Avg Nurse Educ (#), Squared | 0.004 | -0.0004 | -0.0186 | 0.0097 | 0.0052
Avg All Other Educ (#) | -0.6124 | 0.4168 | -0.3923 | -0.2539 | 0.842
Avg All Other Educ (#), Squared | 0.0184 | -0.0134 | 0.0118 | 0.0086 | -0.0254
Technology Count (#) | -0.1249* | -0.1194** | 0.6019*** | -0.0451 | -0.3125**
Interact Technology Count (#) and: | | | | |
Avg All Other Educ (#) | 0.0000 | -0.0001 | -0.0037 | -0.0019 | 0.0056
Avg Nurse Educ (#) | 0.0081** | 0.0077* | -0.0358*** | 0.005 | 0.015*
Beds (#) | -0.0002** | -0.0002*** | 0.0002** | 0.0001 | 0.0001**
For-Profit | 0.0383** | 0.0762*** | 0.2172*** | -0.0712*** | -0.2605***
Medical School Affiliation | 0.0016 | 0.0133 | -0.0714** | 0.0058 | 0.0507*
Urban | -0.0031 | -0.0167 | 0.0699** | -0.0142 | -0.0359
Year Dummy | -0.0012 | -0.0033*** | 0.0005 | -0.0007 | 0.0047***
Model C (N=2274) | Stage 0 | Stage 1 | Stage 2 | Stage 3 | Stage 4+
Licensed Practical/Vocational Nurses (%) | 0.4607** | 0.3355 | -1.3676*** | 0.5953 | -0.0239
Registered Nurses (%) | -0.1522** | 0.0041 | 0.0572 | -0.0663 | 0.1573
Nurse Practitioners (%) | -1.7764 | 2.5811 | -10.4537 | 17.5766*** | -7.9276
Physician Assistants (%) | 0.4789 | -2.1310 | 12.5677* | -6.7622* | -4.1534
Technicians (%) | 1.3889*** | -0.1516 | 0.8028 | -0.9674** | -1.0727
Therapists (%) | 0.8262** | -0.6888* | -0.9041** | -0.2772 | 1.0439***
Technology Count (#) | -0.0005 | -0.0065** | -0.0083* | 0.0031 | 0.0123**
Interact Technology Count (#) and: | | | | |
Licensed Practical/Vocational Nurses (%) | -0.0449** | -0.0249 | 0.2098*** | -0.0802** | -0.0597
Registered Nurses (%) | 0.0208*** | 0.0026 | -0.0146 | -0.0004 | -0.0084
Nurse Practitioners (%) | 0.1211 | -0.1557 | 0.9089* | -1.5562*** | 0.6818
Physician Assistant (%) | -0.0005 | 0.3294 | -1.3147** | 0.5589** | 0.4268*
Technician (%) | -0.1079*** | 0.0283 | 0.0047 | 0.0635** | 0.0114
Therapist (%) | -0.0658 | 0.0120 | 0.0918*** | 0.0255 | -0.0635***
Beds (#) | -0.0001** | -0.0002*** | 0.0002* | 0.0001 | 0.0001*
For-Profit | 0.0343** | 0.0768*** | 0.2219*** | -0.0706*** | -0.2624***
Medical School Affiliation | -0.0055 | 0.0078 | -0.0635** | 0.0111 | 0.0500*
Urban | 0.0028 | -0.0173 | 0.0579* | -0.0085 | -0.0349
Year Dummy | -0.0005 | -0.0031*** | 0.0008 | -0.0010 | 0.0038**
*p<0.10, **p<0.05, ***p<0.01
Table 2: Generalized ordered probit predicting EMRAM stages, marginal effects.
MODEL A (N=2274) | OLS | Q10 | Q25 | Q50 | Q75 | Q90
---|---|---|---|---|---|---
AvgEduc (#) | -0.3101 | 16.6983 | -10.3590 | -3.2331 | -14.0413 | -19.8609
AvgEduc (#), Squared | 0.0144 | -0.5300 | 0.3432 | 0.1101 | 0.4581 | 0.6359
Technology Count (#) | -0.5444 | -0.0602 | -0.2683 | -0.3326 | -0.5173 | -1.4524**
Interact Technology Count (#) and: | | | | | |
AvgEduc (#) | 0.0369* | 0.0052 | 0.0184 | 0.0230 | 0.0351 | 0.0961**
Beds (#) | 0.0012*** | 0.0008*** | 0.0011*** | 0.0014*** | 0.0015*** | 0.0018***
For-Profit | -0.5841*** | -0.4886*** | -0.2079*** | -0.2921*** | -0.5009*** | -1.0144***
Medical School Affiliation | -0.0610 | 0.0196 | 0.0220 | 0.0006 | -0.0941 | -0.1871
Urban | -0.1282** | -0.0790 | -0.0426 | -0.1104* | -0.1828** | -0.2222*
Year Dummy | 0.0167*** | 0.0045 | 0.0072** | 0.0158*** | 0.0172*** | 0.0176
Constant | 1.3345 | -132.3829 | 77.4970 | 23.3675 | 107.8771 | 156.1778
MODEL B (N=2266) | OLS | Q10 | Q25 | Q50 | Q75 | Q90
Avg Nurse Educ (#) | -1.2292 | 0.4753 | 0.5553 | -0.0483 | -6.0081 | -13.6881
Avg Nurse Educ (#), Squared | 0.0539 | -0.0085 | -0.0084 | 0.0178 | 0.2143 | 0.4540
Avg All Other Educ (#) | 1.4476 | 3.2868* | 1.7072 | -0.6649 | 0.4353 | 4.3743
Avg All Other Educ (#), Squared | -0.0411 | -0.0915* | -0.0497 | 0.0212 | -0.0128 | -0.1301
Technology Count (#) | -0.0811 | 0.3716 | 0.1821 | 0.1450 | -0.1341 | -1.3152
Interact Technology Count (#) and: | | | | | |
Avg All Other Educ (#) | 0.0052 | -0.0134 | -0.0023 | 0.0010 | 0.0014 | 0.0351
Avg Nurse Educ (#) | 0.0018 | -0.0091 | -0.0086 | -0.0091 | 0.0090 | 0.0530
Beds (#) | 0.0012*** | 0.0009*** | 0.0012*** | 0.0014*** | 0.0016*** | 0.0021***
For-Profit | -0.5959*** | -0.4988*** | -0.2140*** | -0.3121*** | -0.5355*** | -0.9742***
Medical School Affiliation | -0.0760 | 0.0101 | 0.0009 | 0.0049 | -0.0544 | -0.3201*
Urban | -0.0794 | -0.0325 | -0.0286 | -0.0528 | -0.1105 | -0.1728
Year Dummy | 0.0191*** | 0.0058 | 0.0085* | 0.0179*** | 0.0215** | 0.0258
Constant | -0.6331 | -35.2690 | -21.5832 | 1.6667 | 38.5465 | 67.5183
MODEL C (N=2274) | OLS | Q10 | Q25 | Q50 | Q75 | Q90
Licensed Practical/Vocational Nurses (%) | -1.7414* | -0.7424 | -1.5949* | -1.6908** | -0.9948 | 1.5336
Registered Nurses (%) | 1.2199*** | 0.6994 | 0.9441*** | 1.1319*** | 1.1164 | 0.8101
Nurse Practitioners (%) | -9.3508 | 2.5624 | 1.1078 | -3.8626 | -9.7869 | -29.3958
Physician Assistants (%) | -12.1468 | -13.7029 | -15.5135 | -15.0750 | -9.9168 | -18.3989
Technicians (%) | -8.2523*** | -8.3959*** | -4.8262 | -2.6442 | -8.0743** | -13.0062*
Therapists (%) | 1.4623 | -1.4704** | -0.6623 | -0.2997 | 1.2393 | 6.4041*
Technology Count (#) | 0.0602*** | 0.0318*** | 0.0311*** | 0.0465*** | 0.0507*** | 0.0775**
Interact Technology Count (#) and: | | | | | |
Licensed Practical/Vocational Nurses (%) | 0.0086 | -0.0003 | 0.0562 | 0.0161 | -0.0535 | -0.3015**
Registered Nurses (%) | -0.1125*** | -0.0604** | -0.0845*** | -0.0847*** | -0.0787 | -0.1023
Nurse Practitioners (%) | 0.7406** | -0.1252 | -0.2033 | 0.3644 | 0.6756 | 2.6698
Physician Assistant (%) | 0.4383 | 0.8542* | 0.9538 | 0.7632 | 0.3972 | 0.4760
Technician (%) | 0.4330** | 0.3724** | 0.2309 | 0.1113 | 0.3939 | 0.8044
Therapist (%) | -0.0952 | 0.1283* | 0.0652 | 0.0056 | -0.1031 | -0.3835**
Beds (#) | 0.0012*** | 0.0007*** | 0.0011*** | 0.0013*** | 0.0016*** | 0.0018***
For-Profit | -0.6053*** | -0.5216*** | -0.2506*** | -0.3330*** | -0.5191*** | -0.9944***
Medical School Affiliation | -0.0521 | -0.0115 | 0.0111 | 0.0051 | -0.0587 | -0.1929
Urban | -0.0710 | -0.0108 | -0.0251 | -0.0468 | -0.1272** | -0.2494*
Year Dummy | 0.0140*** | 0.0026 | 0.0034 | 0.0141** | 0.0107 | 0.0123
Constant | -0.1729 | -0.8591*** | -0.5667*** | -0.3872*** | 0.2290 | 0.9049**
*p<0.10, **p<0.05, ***p<0.01
Table 3: OLS and quantile regression of EHR competency index.
In the first set of regression models – goprobit models using the EMRAM stage variable as the outcome – a hospital with more high tech is 39% less likely to be in the highest EMRAM stage, and about equally more likely to be in Stage 2 (Table 2, Model A). The trend remains consistent in Model B, with the addition that hospitals with more high tech are significantly less likely to have little or no EHR. The high tech coefficients in Model C (where the share of each type of occupation substitutes for average education) change direction for Stage 2, but the effect is small (0.8%) and marginally significant (p<0.10). The direction of the high tech coefficient also reverses for the highest EMRAM stages; although significant (p<0.05), the effect is again small (a 1.2% increase). Earlier we stated that goprobit theoretically only allows a single crossing of the coefficients’ sign, but empirically multiple crossings can occur; the single crossing property does not hold for our results. While not a fatal flaw, and the fluctuations are small, this occurrence suggests that the model is not the best fit.
The average education values used in Models A and B reflect a whole host of occupations, and thus the insignificant coefficients on education are not too surprising; separating out the specific types of occupations matters since one occupation may cancel out the effect of another. A hospital with 1% more LP/VNs is 46% more likely to be in Stage 0 and 137% less likely to be in Stage 2. A hospital with 1% more RNs is 15% less likely to be in Stage 0, and the share of RNs has no effect on the other stages of the EMRAM. More nurse practitioners greatly increase the probability of being in Stage 3, though the results should be interpreted with caution given the small sample of hospitals in Stage 3 and the small share of NPs within these hospitals; similarly, the results for PAs should be treated with caution, though it is of interest that the effect runs in the opposite direction. Interestingly, a greater share of technicians and therapists significantly increases the probability of a hospital having no EHR system. Unlike technicians, more therapists also significantly increase the probability of a hospital being in the highest EMRAM stage.
The results in Model C, as well as in Model B, suggest that high tech alone does not predict the EMRAM stage, but rather the interplay of high tech and workforce matter. In nearly all cases, the interaction between the types of employees and high tech significantly tempers the effect on each significant relationship between share of employees and EMRAM stage; the trend likely reflects the employees’ use of both EHRs and high tech. The tempering effect of the interaction term on the workforce effect on EMRAM stage suggests a sharing of the workforce skills across each type of technology. As mentioned earlier, the challenge of using the EMRAM variable as the dependent variable is the potential endogenous relationship between adoption of high tech and EHR especially given our limitation that we do not know the precise order of events.
In Table 3 we show the results from substituting the hospital EHR competency index for the EMRAM stage variable as the dependent variable. The trends when using the hospital’s EHR competency index as the outcome variable show some consistency with the EMRAM stage models; the advantages of using the EHR competency index, as mentioned earlier, are that the results are more straightforward to interpret and that they capture a different and potentially more interesting relationship between EHRs, high tech, and workforce.
In Table 3, Model A, the first trend of note is that, similar to the EMRAM findings, high tech alone does not significantly increase a hospital’s EHR competency. The exception is a negative association at the high end of the EHR competency distribution. The result makes some intuitive sense: unless workers transfer the knowledge and skills learned from interacting with the high tech, the high tech equipment alone is not able to influence EHR competency. The results from Model A suggest that a transfer of knowledge and skills is more likely to improve EHR competency among more educated workers; however, the effects nearly disappear in Model B when we separate education for nurses versus non-nurses.
Table 3, Model C reveals slightly different trends, the most striking of which is the return of a strongly significant effect of technology whereby having more high tech increases the competency level; compared to the insignificant effects of high tech in Models A and B, the effect size is very small, although it is interesting to note that the effect is amplified as hospitals move up the competency distribution. The argument that high tech alone has minimal impact on competency still mostly holds. One could scrutinize the various items of the high tech index and argue that some items may not be stand-alone equipment but rather reflect a set of skills transferable to EHR competency, which may be where the significance is picked up; we do not explore the influence of each high tech item here given the possible small sample size problem.
The influence of the workforce variables on EHR competency is difficult to compare directly with the EMRAM stage results, but the trends have some resemblance. Generally, a greater share of lower skilled nurses tends to negatively influence hospitals in the lower levels of the EHR competency distribution while a greater share of higher skilled nurses positively influences these hospitals. Specifically, more LP/VNs decrease EHR competency levels in the lower to middle half of the distribution, but more RNs nearly counter the effect. Similar to the trends in the EMRAM model, a greater share of technicians reduces EHR competency at the extreme ends of the distribution with similar magnitudes, with a slightly stronger effect at the lower end. A greater share of therapists also reduces EHR competency at the lower end of the distribution, but marginally increases competency at the opposite end.
Consistent with the EMRAM findings, the high tech effect is almost always complemented by significant effects on the interaction terms with the workforce measures, which again suggests the important interplay between technology and workforce. The interaction between LP/VNs and high tech is not significant except at the high end of the distribution whereby a greater share of LP/VNs and more high tech reduces EHR competency. A greater share of RNs and more high tech decreases EHR competency at the lower half of the distribution by similar magnitudes; the effect is small relative to the RNs’ direct impact on competency without the interaction term. A greater share of PAs, technicians, and therapists along with more high tech increases EHR competency at the lowest ends of the distribution; the interaction between NPs and high tech has no impact on EHR competency.
Qualitative research suggests that health IT adoption occurs in stages, involves a set of complex decisions, and requires consideration of the current internal and external environment of the hospital. Our study supports the idea that a tradeoff occurs between EHR adoption, high tech adoption, and workforce; the tradeoff may be over the set of services the hospital wants to deliver or the types of patients the hospital is serving. The fact that most of the hospitals in the lower stages of the EMRAM are for-profit, urban, and not affiliated with a medical school supports this idea.
All hospitals face budget constraints, and hence difficult tradeoffs. As hospitals consider how best to allocate their labor and capital resources to adopt EHRs, they face simultaneous decisions about whether to adopt other high tech equipment such as MRI and CT scanners, or to perform sophisticated procedures such as cardiac surgery. A complementary skilled workforce is required to implement EHRs and to operate this high tech equipment and perform these procedures, e.g., radiology technicians and surgical technicians. Technologies are typically thought to be cost contributors rather than cost savers [14]. Advanced technologies (e.g., MRI and CT scanners or sophisticated surgeries) may contribute to costs due to the high initial capital investment and the ongoing labor costs for specialists and assistive staff. The cost saving would come when these technologies prevent exacerbation of a health condition or save a life, for example [15-18].
Productive technologies (e.g., IT) also have high initial capital investments and training costs, as well as ongoing labor costs for EHR technicians. Experience from other industries that adopted IT suggests that the savings from IT should outpace these costs as IT begins to reduce administrative waste, which could partly be identified by the substitution for labor (e.g., transcribers, clerks). To date, no cost savings have been found. One of the culprits may be the lack of staffing, which has been found to be a considerable barrier to EHR adoption; thirty percent of hospitals cited the lack of available staff with adequate expertise in IT [2]. This barrier could put some hospitals ahead of, and others behind, the competency curve. The HITECH Act dedicated $80 million to educate 45,000 health IT technicians to be hired by health providers for technical support of implementation and ongoing assistance. These incentives, while a start, are set up with limited, if any, evidence of how health IT and workforce relate.
The results using the EMRAM stage variable suggest that hospitals may be experiencing a tradeoff or “sacrifice” of their high tech to get to the highest level of EHR adoption. The marginal returns from adding more high tech may diminish at the higher stages of EHR adoption. Also these results suggest that hospitals may not have enough skilled nurses on staff to facilitate the adoption process. The results from the EHR competency models strongly suggest that capital-skill complementarity plays a role in determining a hospital’s EHR competency. A hospital may need a greater share of skilled nurses such as RNs on their staff in order to increase EHR competency. But these RNs may be spread too thin or too many other technologies are present, which may prevent them from facilitating the EHR adoption process especially in hospitals at the early stages of the process. A staff with a greater share of LP/VNs, technicians and therapists is associated with lower levels of EHR competency; but if these staff members work with other high tech, some transferability of skills and knowledge may be occurring that counters this effect especially for hospitals at the early stages of the adoption process.
We only used one year of HIMSS Analytics data, but future work could take advantage of more years to build a more complex, dynamic model of workforce, high tech, and EHR adoption; the challenge of using multiple years of data is the inconsistent set of variables across the years of our study. Another challenge is that we did not have any information on the exact roles of the staff in the hospital; the trends in our data may reflect a lack of leadership or the lack of critical staff necessary to implement EHRs. Also, we were not able to identify the bulk of the other types of personnel in the hospital, which warrants further investigation. Nor were we able to model the order of events regarding adoption of high tech versus EHRs, or to pair occupations with the use of specific high tech items or EHR tools. Our results are at the firm level and may conceal stronger capital-skill complementarity relationships [16-20].
The EHR competency index is based on a set list of EHR items; a limitation of our work is that the list assumes that each item builds on the others and does not allow any substitution between the items. Changes to these items will make it difficult to compare the distribution of hospitals in the EMRAM over time. Also, the EMRAM stage model reflects a hospital’s actions while the competency index reflects a hospital’s latent traits; a hospital may not yet have adopted the higher stages of EHR but may have the competency to do so. The IRT methodology, however, attempts to quantify an underlying trait that should not change as the questions change; future studies could apply the IRT methodology to other survey instruments [21-22]. We would expect that while the competency parameter of a hospital may not be exactly the same, the position of the hospital relative to its peers will remain relatively constant [22-24].
Health IT brings change to the organizational structure and culture of a hospital, and it requires significant training time, which may negatively impact productivity. Our study presents evidence linking a hospital’s experience with high tech to its competency to adopt EHRs. We identify a capital-skill complementarity that may lead to a transfer of skills and knowledge that improves EHR competency. The skill mix of the hospital workforce, particularly in the direction of more RNs on staff, is positively associated with a hospital’s EHR competency as long as the RNs are not stretched too thin with responsibilities to operate other high tech.
Hospitals face difficult tradeoffs in their decision to move up the EMRAM stages. Achieving higher levels may come at the sacrifice of other services that may be core to the hospital’s mission and vision, or at the expense of their bottom line. Future studies need to understand the complex dynamics that lead to investment decisions into EHRs in the first place. Those decisions impact whether a hospital has the competency to adopt EHRs. How competency then translates to savings and better quality of care outcomes remains an open question. This study begins to scratch the surface by modeling the complex dynamic between EHR adoption, high tech adoption, and workforce skill mix decisions.
Our study also contributes to the literature by offering an alternative to the EMRAM variable, which currently presents endogeneity problems when used in such dynamic models. The continuous nature of the organizational EHR competency index (versus the categorical EMRAM stage variable) lends itself nicely to the idea that EHR adoption may be a continuous process. The alternative measure is not a direct substitute, but rather provides a more meaningful measure - EHR competency - that may serve as a useful predictor in future models.
We would like to thank Ms. Neela Kumar for her excellent research assistance. We are grateful to Dr. Jeffrey “Bart” Bingenheimer for enlightening us to the method of Item Response Theory. We thank HIMSS Analytics for providing us data. We also appreciate the feedback from Dr. Eric Barette, discussant to our presentation of an earlier draft at the 4th Biennial Conference of the American Society of Health Economists in Minneapolis, MN in June 2012. Conclusions were presented at the Workshop on Health IT and Economics in Washington, DC in October, 2012. This study was funded by the National Collaborative on Aging while Dr. Frogner was an assistant professor at The George Washington University.