Objective: Given neuroimaging evidence of overlap in the circuitries for decision-making and olfactory processing, we examined the hypothesis that impairment in psychophysical tasks of olfaction would independently predict poor performance on the Iowa Gambling Task (IGT), a laboratory task that closely mimics real-life decision-making, in a US cohort of HIV-infected (HIV+) individuals. Method: The IGT and psychophysical tasks of olfaction were administered to a Washington, DC-based cohort of largely African American HIV+ subjects (N=100), and to a small number of demographically matched non-HIV healthy controls (N=43) from a different study. Constructs of olfactory ability and decision-making were examined through confirmatory factor analysis (CFA). Structural equation models (SEMs) were used to evaluate the validity of the path relationship between these two constructs. Results: The 100 HIV+ participants (56% female; 96% African American; median age=48 years) had a median CD4 count of 576 cells/μl and a median HIV RNA viral load <48 copies per milliliter. The majority of HIV+ participants performed randomly throughout the course of the IGT and failed to demonstrate a learning curve. Confirmatory factor analysis supported a unidimensional factor underlying poor performance on the IGT. Nomological validity for the correlation between olfactory ability and IGT performance was confirmed through SEM. Finally, factor scores of olfactory ability and IGT performance strongly predicted a 6-month history of drug use, while olfaction additionally predicted hallucinogen use. Conclusion: This study suggests that a combination of simple, office-based tasks of olfaction and decision-making may identify those HIV+ individuals who are more prone to risky decision-making. This finding may have significant clinical and public health value if joint impairment in olfaction and the IGT correlates with decreased activity in brain regions relevant to decision-making.
Keywords: HIV, Olfaction, Cognitive function, Iowa Gambling Task
Despite widespread knowledge about ways to prevent HIV transmission, a substantial number of HIV-infected (HIV+) individuals struggle to avoid the risky behaviors that lead to HIV transmission [1]. Investigating the mechanisms underlying risky decision-making may therefore contribute to the development of office-based screening tests that could identify those HIV+ individuals with increased tendencies to continue behaviors that raise the rate of HIV transmission in their communities. The need is particularly acute in urban-dwelling minority populations, not only because they are disproportionately affected by HIV/AIDS [2], but also because they are substantially underrepresented in the psychology literature. The prefrontal cortex (PFC) and medial orbitofrontal cortex (mOFC) play important roles in both decision-making and high-order olfactory processing [3-5]. These same regions may also be negatively impacted by HIV neurotoxicity, particularly in interaction with illicit drugs such as cocaine [6]. It is therefore conceivable that functional and structural impairments in these regions in HIV+ individuals could be reflected in decreased performance on clinical tasks of decision-making, such as the Iowa Gambling Task (IGT), and on psychophysical tasks of central odor processing. Additionally, demonstrating a statistical relationship between the olfactory ability and IGT performance constructs would provide nomological validity for an underlying predisposition to both, based on the scientific premise of overlap in their neural circuitry.
HIV risk is intertwined with heightened risk for substance use disorders, high-risk sexual behaviors and sexually transmitted infections [7,8]. Substance use, including injection and non-injection drug use or alcohol use, can affect attitudes, decision processes and motivational aspects of sexual behavior [7-9]. A significant proportion of drug abusers are HIV+, a fact that probably reflects an underlying dysfunction in decision-making processes that is common to drug or alcohol use and the acquisition of HIV. Substance use may increase the likelihood of engaging in behaviors that increase HIV acquisition and transmission and drive many new infections, through inconsistent condom use, having multiple partners, and sharing injection needles. Both the use of drugs such as cocaine and HIV itself have also been found to be directly and synergistically pathological to the prefrontal cortex and the dorsal and ventral striatum, regions involved in decision-making [6,10,11].
Although the validity of the IGT as a surrogate measure of risky decision-making has been questioned by some [12], a substantially larger number of studies do, in fact, demonstrate reduced IGT performance in patients with functional deficits in the mOFC and PFC [10,13-16], neural structures relevant to decision-making. During the IGT, participants are instructed to maximize winnings while choosing repeatedly from four decks of playing cards that unpredictably yield gains and losses. Decks with higher wins result in a net loss in the long term, while decks with smaller wins result in a net gain [17]. Conventionally, success on the IGT requires that participants learn over the course of the exercise that two of the decks are ‘disadvantageous,’ resulting in higher immediate gains but higher losses, and two of the decks are ‘advantageous,’ resulting in lower immediate gains but lower losses [18]. Some individuals, notably those with mOFC abnormalities, show a reduced ability to demonstrate this learning. One analytic approach suitable for modeling the learning effect in the IGT is the linear growth curve model [19]. This approach models mean performance at the individual level, known as the random effect, and a slope parameter that represents whether an individual’s performance improves or worsens over time. One important limitation of this approach is that in certain populations, performance on the IGT is random; that is, performance neither improves nor worsens over time. For instance, in our preliminary studies of low-income minority populations living in Washington, DC, performance over time was random, showing no trend in learning (see Supplementary Data). Therefore, fitting linear growth curves to these data resulted in very poor statistical fit and a non-significant slope (i.e., learning) parameter. Additionally, given that our previous (unpublished) studies in different ethnic populations showed a sizable proportion of participants demonstrating learning over the course of the IGT, this recent observation of predominantly random performance in our sample of ethnic minorities highlights the suboptimal representation of this population in previous psychological research.
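For concreteness, a minimal sketch of such a linear growth curve (mixed-effects) model in Python with statsmodels; the long-format column names subject, block and net are illustrative, not taken from the study's dataset:

```python
# Linear growth-curve model for IGT learning: a random intercept per
# participant plus a slope on block. The fixed 'block' coefficient is the
# average learning trend; in a sample performing at random, this slope
# would be near zero and non-significant, mirroring the poor fit reported.
import pandas as pd
import statsmodels.formula.api as smf

def fit_igt_growth_curve(df: pd.DataFrame):
    model = smf.mixedlm(
        "net ~ block",           # fixed effects: intercept + learning slope
        df,
        groups=df["subject"],    # random intercept for each participant
        re_formula="~block",     # random slope: individual learning rates
    )
    return model.fit()
```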
Based on the empirical finding of significant within-individual correlations across time blocks of the IGT in this HIV+ population, this paper therefore focuses on examining the underlying trait or factor that possibly ‘induces’ such an abnormal pattern of IGT performance. Additionally, motivated by research evidence that performance on odor identification, odor discrimination and odor memory tasks relies in part on the structural and functional integrity of the mOFC [4,20,21], this study examines whether poor performance on the IGT is correlated with poor performance in central olfactory processing. The overarching hypothesis is that the net scores of the IGT (i.e., differences between ‘advantageous’ and ‘disadvantageous’ cards) played over the 5 blocks will be significantly correlated with composite performance on central olfactory tasks, suggesting abnormal functioning in the brain regions that jointly mediate olfaction and decision-making. Confirmatory factor analysis (CFA) and structural equation modeling (SEM) approaches were employed to support the tenability of a nomological relationship between olfactory ability and decision-making on the IGT. Both CFA and SEM have a rich collection of well-established post-estimation parameters for examining the validity of hypothesized traits, including examination of residual collinearity between measures of such traits [22,23].
Finally, since neurobiological studies of HIV suggest that HIV-1-associated dysregulation of the OFC and PFC is further potentiated by psychotropic drugs such as cocaine [11], this study also compared IGT and olfactory performance of HIV+ subjects with and without a history of cocaine use and other drugs commonly abused in this population. Results from this set of studies reveal that performance on the IGT may be linked to performance on psychophysical tasks of olfaction, thereby strengthening our assumption that olfaction and decision-making may be linked by overlapping circuitry.
Study site and participants
The study sample consisted of adult HIV+ individuals who were receiving care at the Family and Medical Counseling Services, Inc. (FMCS), between 2012 and 2014. This primary care center provides health services to the largely African-American communities living in the Southeast district of Washington, D.C. Participants were identified through chart reviews by case managers who worked directly with the patients. Interested patients were contacted by study personnel for eligibility screening. Study inclusion criteria included being HIV+ and attending clinic regularly at FMCS. One of the earlier objectives of the parent study was to establish a cohort for a prospective study of HIV-Associated Neurocognitive Disorders. As such, HIV+ patients with dementia, stroke, Parkinson’s disease, multiple sclerosis or traumatic brain injury were excluded. Additionally, HIV+ patients with mental retardation, psychosis, allergies to odorants, current nasal disease or surgery, or recent intranasal drug use were excluded. For the purpose of this study, recent intranasal drug use was defined as any snorting of illicit drugs in the 6 months prior to study enrollment. The rationale for excluding patients with intranasal conditions is the potential for confounding by peripheral anosmia, which in turn impacts olfactory cognition. In this vein, participants with a current cold, or with allergic or infective rhinitis, were rescheduled for testing 8 weeks after resolution of their symptoms. Non-HIV subjects (N=43) from the same geographic locations as the HIV+ subjects, who had been previously recruited as controls for an olfactory molecular study of bipolar disorders, were used as a comparison group for this study. Recruitment of these controls for the bipolar study has been described elsewhere [24].
Study procedures
The study protocol was approved by the Howard University Institutional Review Board. Each participant signed a written informed consent document after the details of the study were explained. Participants then responded to questionnaires covering their background and medical history. The entire study procedure, including psychiatric diagnostic interviews, substance use surveys, cognitive batteries, olfactory assessments and IGT tasks, took approximately three to four hours. Participants were compensated for their time and provided with bus tokens for transportation.
Assessment of performance on IGT tasks
The participants completed the computerized version of the IGT, administered by trained research staff (MMH and NR) with doctorate degrees. During the IGT, four virtual decks of cards are presented on a computer screen with instructions to select from these four decks in order to win a sum of money. Participants are told that they will both win and lose money when selecting cards from the four decks. The test requires participants to draw one card at a time from any of the four decks in order to maximize their net profit. A loaned cash pile is represented by a bar at the top of the screen, while a lower bar represents the remainder of the borrowed cash left to play the game. The objective of the game is to earn as much money as possible. All four decks are associated with gains and deductions, but some decks are worse than others. The A and B decks produce immediate large gains but are associated with higher penalties due to their larger deductions over the course of the game; as a result, the developers of the IGT referred to the A and B decks as ‘disadvantageous cards’ [13,25]. The C and D decks produce smaller gains but are associated with fewer penalties due to smaller deductions throughout the game; accordingly, the developers referred to the C and D decks as ‘advantageous cards’ [13,25]. Participants are instructed to keep playing until the task ends, when 100 cards have been selected. Participants who primarily select cards from decks A and B end the game with less money due to larger deductions, while participants who learn to select from the C and D decks receive a net gain of money due to smaller deductions. The 100-card selection game is split into 5 blocks of twenty cards each: block 1 consists of the first 20 cards played; block 2 consists of the next 20 cards selected (i.e., cards 21 to 40); block 3 consists of cards 41 to 60; and so on. At the end of the game, IGT performance was determined by calculating the net score (Net: the number of C and D cards selected minus the number of A and B cards selected) for each of the 5 blocks. Average Net 1 to Net 5 scores have been published from largely Caucasian-based studies [12]. A large number of studies have demonstrated that individuals who draw from the so-called ‘disadvantageous cards,’ with a resultant total net loss, tend to exhibit a decision-making process focused more on the rewards of decisions than on their punishments, and display a deficit in learning to make advantageous decisions [13,25,26].
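As an illustration of the scoring rule just described, a short sketch; the deck labels and the choices list follow the conventional task description, not the study's software:

```python
# Compute Net 1..Net 5 from an ordered list of 100 deck selections:
# each block's net score is (C + D picks) minus (A + B picks).
def igt_net_scores(choices, block_size=20):
    advantageous = {"C", "D"}      # smaller gains, smaller deductions
    disadvantageous = {"A", "B"}   # larger gains, larger deductions
    nets = []
    for start in range(0, len(choices), block_size):
        block = choices[start:start + block_size]
        nets.append(sum(c in advantageous for c in block)
                    - sum(c in disadvantageous for c in block))
    return nets

# A player who only ever draws from deck A scores -20 on every block:
print(igt_net_scores(["A"] * 100))   # [-20, -20, -20, -20, -20]
```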
Assessment of olfactory function
Psychophysical testing was performed using the OLFACTCombo (Osmic Enterprises, Inc.), a flow-dilution olfactometer. This olfactometer delivers different odorants intranasally, with automated computer presentation of the battery tasks and choice options for clinical assessment of odor identification, odor memory, odor threshold and odor discrimination. The instrument delivers odorants via two plastic tubes attached to a chamber containing two separate compartments. One plastic tube is attached to a compartment containing n-butanol solutions of various concentrations for the detection of odor threshold. The other tube is attached to a second compartment containing 20 different odorants, and is used for odor identification, memory and discrimination.
Odor thresholds were assessed using a series of binary dilutions of a 4% n-butanol solution in light mineral oil. The dilutions were prepared in a series starting from a 4% vol/vol n-butanol solution: the strongest concentration is 4% vol/vol, the next strongest is 2%, followed by 1%, and so on, for thirteen different concentrations in total. Two puffs of air are presented in sequence: one contains the odorant at a given concentration and the other is blank air. The participant’s task is to identify the puff of air containing the odorant. Each puff of air is presented for 5 seconds, followed by the second puff 3 seconds later. A series of two puffs (real odor and plain air) is presented every 10-15 seconds. If the participant correctly identifies the odor, the same odorant concentration is presented a second time. If the odorant is identified a second time, the next presentation uses a lower concentration of n-butanol. Conversely, if the odorant is incorrectly identified, the next presentation uses a higher concentration. A reversal is recorded when the direction of the staircase changes, e.g., when the participant correctly identifies two consecutive trials after an incorrect response. The test continues until 3 reversals are recorded. The program terminates if the participant is unable to identify 4 presentations of the highest concentration, or once the 3 staircase reversals have occurred. To simplify clinical application, negative log transformations of the concentrations are used to derive threshold values, so that a more sensitive nose that detects very low concentrations receives a higher odor threshold score. The threshold is calculated as the mean of the last 2 of the 3 staircase reversals.
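A sketch of the staircase logic as we read it from the description above; the present_trial callable stands in for the olfactometer interface, and the starting step and exact reversal bookkeeping are our assumptions, not vendor specifications:

```python
import math

def butanol_threshold(present_trial, n_steps=13, top_pct=4.0):
    """Two-down/one-up staircase over binary dilutions of 4% n-butanol.
    `present_trial(conc_pct)` returns True if the participant picked the
    odorized puff at that concentration (a stand-in for the device)."""
    # Index 0 is the strongest concentration (4%); higher index = weaker.
    conc = [top_pct / 2 ** i for i in range(n_steps)]
    step = n_steps - 1            # assumption: start at the weakest dilution
    direction = None              # "down" = toward weaker, "up" = toward stronger
    reversals, streak, fails_at_top = [], 0, 0
    while len(reversals) < 3:
        if present_trial(conc[step]):
            streak += 1
            if streak == 2:                   # two consecutive correct: go weaker
                streak = 0
                if direction == "up":
                    reversals.append(step)    # direction change = reversal
                direction = "down"
                step = min(step + 1, n_steps - 1)
        else:
            streak = 0
            if step == 0:
                fails_at_top += 1
                if fails_at_top == 4:         # cannot detect strongest: stop
                    return None
            if direction == "down":
                reversals.append(step)
            direction = "up"
            step = max(step - 1, 0)
    # Mean of the last 2 reversals on a -log10 scale, so a more sensitive
    # nose (detecting weaker solutions) receives a higher threshold score.
    return sum(-math.log10(conc[s] / 100.0) for s in reversals[-2:]) / 2
```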
Odor identification was assessed by means of 10 common odorants, namely: menthol, clove, leather, strawberry, lilac, pineapple, smoke, soap, grape and lemon. These odors are presented one at a time, every 10 to 20 seconds. Each odorant is presented for 5 seconds or until the participant makes a choice. Following each odor presentation, participants are asked to identify the current odorant from a multiple choice of 4 odorant names shown on the computer screen directly in front of them.
Tests of odor memory are conducted 10 minutes after completion of the odor identification tasks. During odor memory testing, participants are presented with 20 odorants: the 10 odorants used in the identification task plus 10 new odorants, namely: banana, garlic, cherry, baby powder, grass, tutti-frutti, peach, chocolate, dirt, and orange. Each participant is then asked to identify each odorant and to indicate whether it was presented previously or is being presented for the first time.
In the olfactory discrimination test, two puffs of air containing odorants are presented in sequence, each for 5 seconds. Participants must indicate whether the two odorants are the same or different. A series of two odorants is presented every 10-15 seconds, with an interval of about 3-5 seconds between odorant 1 and odorant 2. Ten combinations were tested: peach-peach; garlic-menthol; grass-grass; chocolate-chocolate; orange-leather; lemon-lemon; clove-banana; tutti-frutti-smoke; dirt-strawberry; and lilac-lilac.
Assessment of cognitive function
Neurocognitive performance was assessed with the Cambridge Neuropsychological Test Automated Battery (CANTAB), a battery of neuropsychological tests conducted on a touch-screen computer [27-29]. Two tests of the CANTAB battery were used to measure frontal lobe function, over a test duration of about 20 minutes. Working memory was assessed through the Spatial Working Memory (SWM) module, which requires participants to find blue tokens hidden in a series of displayed boxes and use them to fill an empty column, while not returning to boxes where a blue token has previously been found. The SWM task requires retention and manipulation of visuospatial information and has notable executive function demands. The SWM strategy score was our outcome measure of interest. Strategy use was scored as the number of times the individual began a new search with a different box on the 8-box problems; hence, the lower the score, the more efficient the strategy used in completing the task [30]. The Stockings of Cambridge (SOC) task examined the participant’s ability to engage in spatial problem solving. Two displays, each containing three colored balls, are presented on the two halves of the screen, and the participant must copy the pattern shown in the top half. The number of problems solved in minimum moves, a record of the number of occasions on which the subject successfully completes a test problem in the minimum possible number of moves, is a fundamental measure of the SOC and a succinct expression of overall planning accuracy [31]. For the purpose of this study, we used the number of correct trials on a minimum of 5 moves as our SOC outcome measure.
Statistical rationale and methods
In a previous pilot study of this population, the majority of individuals displayed random performance across the 5 blocks. They did not develop a learning curve, which would be marked by increasing net scores from blocks 1 to 5, nor did they demonstrate decreasing performance across the blocks (spaghetti plot in Supplementary Figure 1). Unsurprisingly, pilot linear and quadratic growth curve models of the data produced non-significant learning parameter estimates and dismal statistical fit (Supplementary Tables 1 and 2). Instead, net scores from block 1 to block 5 showed significant within-individual correlations, particularly for blocks 2 to 5 (Supplementary Table 3). A lack of congruence between performance scores on the first block and the later blocks has been widely reported [32].
Therefore, the construct underlying the observed within-individual correlations in poor performance across blocks 2-5 of the IGT was modeled through confirmatory factor analysis (CFA) and structural equation modeling (SEM) in the Stata 13.1 software program [33]. Net 1 is taken as the trial session during which participants learn the risk-reward system of the task. The CFA approach was used to measure the relationship between the construct underlying abnormal IGT performance across blocks and the observed IGT raw scores. We formed the construct illustrated in Figure 1, where the underlying (latent) trait is represented as a circle and the observed variables (Net scores 2-5) are represented as squares. The results of a CFA include: the ‘loading’ coefficients, i.e., how much the mean score of each observed variable (e.g., Net 2) would change with a unit increase in the level of the latent trait; the estimated variance of the latent trait; the residual variances of the observed variables (i.e., the part of the variation of each observed variable that is not accounted for by the latent trait); and any covariance between two or more of the observed variables. Additionally, CFA is followed by post-estimation tests, including the goodness-of-fit chi-squared (Χ2) test, root mean square error of approximation (RMSEA) [34], comparative fit index (CFI) [35], and standardized root mean square residual (SRMR) [36], which are used to examine the tenability of the latent model. For a CFA to be valid, the loading coefficients should be significant, and post-estimation fit should include: goodness-of-fit Χ2 P value >0.05; RMSEA ≤ 0.05; and CFI > 0.95 [22]. Additionally, a low SRMR (<0.1) is desirable. Collinearity of net scores across blocks was examined using the modification index routine in Stata [22,33].
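The analyses themselves were run in Stata 13.1; purely as an illustration, a minimal sketch of the same one-factor CFA in Python with the semopy package, assuming a DataFrame df with one row per participant and columns net2-net5 (column names are ours, not the study's):

```python
# Minimal one-factor CFA sketch (semopy is a stand-in for Stata's sem suite).
import semopy

DESC = """
# A single latent 'decision-making' trait measured by Net scores 2-5
IGT =~ net2 + net3 + net4 + net5
"""

model = semopy.Model(DESC)
model.fit(df)

print(model.inspect(std_est=True))   # loadings, variances, residual variances
print(semopy.calc_stats(model).T)    # chi-square, RMSEA, CFI and related indices
```

Fit would then be judged against the thresholds above (Χ2 P value >0.05, RMSEA ≤ 0.05, CFI > 0.95, low SRMR).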
A two-factor CFA was used to assess the relationship between the factor underlying IGT performance and central olfactory ability, both of which were modeled as latent traits in this study. Here, olfactory ability was indexed by scores on odor identification, odor discrimination and odor memory, while IGT performance or ‘decision-making’ was indexed by the Net 2 to Net 5 scores. Odor threshold was not included because it is a measure of peripheral olfactory function; the prime use of the odor threshold task is to exclude people with peripheral anosmia from conditions such as allergic rhinitis. Post-estimation parameters for the two-factor CFA were determined as described above for the one-factor model establishing the validity of the IGT construct.
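Continuing the illustrative semopy sketch above, the two-factor model adds the olfactory-ability factor and a freely estimated factor covariance; the indicator column names ident, disc and mem are again ours:

```python
# Two-factor CFA sketch: IGT 'decision-making' plus olfactory ability,
# with their covariance (the curved-arrow correlation) estimated freely.
DESC2 = """
IGT   =~ net2 + net3 + net4 + net5
OLFAC =~ ident + disc + mem
IGT ~~ OLFAC   # correlation between the two latent traits
"""

model2 = semopy.Model(DESC2)
model2.fit(df)
print(model2.inspect(std_est=True))
print(semopy.calc_stats(model2).T)
```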
Finally, predicted factor scores for the IGT performance construct and olfactory ability were derived from their respective CFA models [22]. Associations between these factor scores and recent use (i.e., within the last 6 months prior to this study) of commonly abused psychoactive drugs were derived from multiple regression equations, adjusting for differences in age, gender, psychiatric diagnosis and cognitive scores (i.e., SWM and SOC). The p value threshold for statistical inference was set at 0.05.
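A sketch of this last step under the same illustrative setup: factor scores are extracted (semopy's predict_factors, standing in for Stata's post-estimation prediction) and regressed on recent drug use with the covariates named above; the drug-use and covariate column names are hypothetical:

```python
# Regress predicted factor scores on recent (past 6 months) drug use,
# adjusting for age, gender, psychiatric diagnosis, SWM and SOC.
import statsmodels.formula.api as smf

scores = model2.predict_factors(df)   # per-participant IGT and OLFAC factor scores
data = df.join(scores)

for outcome in ("IGT", "OLFAC"):
    fit = smf.ols(
        f"{outcome} ~ cocaine + heroin + hallucinogen"
        " + age + female + psych_dx + swm + soc",
        data=data,
    ).fit()
    print(fit.summary())              # betas analogous to those in Table 4
```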
Participant characteristics
Of the 125 eligible subjects who were initially approached, 95% (N=119) expressed willingness to participate. Of these 119, the first 100 names on the list were selected, as the budget allowed for 100 subjects. Study participants’ socio-demographic and clinical characteristics are summarized in Table 1. One hundred HIV+ adults (56% female; 96% African American; median age=48 years) completed the IGT, the psychophysical tasks of olfaction, and the questionnaires. CD4 count and viral load were available for 55 and 60 subjects, respectively. The median CD4 count was 576 cells/μl (range=57-1202 cells/μl), and 44 subjects (80%) had a CD4 count greater than 350 cells/μl. The median HIV RNA viral load was <48 copies per milliliter (range <20-588,030 copies per milliliter). Of the 60 patients with available viral load results, 33 (55%) had undetectable viral loads. Seventy-five patients were on antiretroviral therapy (ART) at the time of the interview, 32 (~43%) of whom self-reported complete adherence in the past year. Percent adherence and viral load were inversely correlated, though the correlation only approached statistical significance (correlation=-0.236; P=0.07). Forty-four participants (44%) had a high school diploma, and only 19 (19%) were employed at the time of the interview. The median (interquartile range, IQR) scores for SWM strategy and SOC problems solved were 37 and 6, respectively; these correspond to functioning at z scores of -1.32 and -1.49, respectively, in the normal distribution of these measures for the 40-49-year age group [27]. Lifetime psychiatric comorbidities included: major depression, 40%; bipolar disorder, 6%; post-traumatic stress disorder (PTSD), 29%; and other forms of anxiety, 43%. Co-morbid substance use within the 6 months prior to this study was also reported, including cocaine (70%), hallucinogens (44%), heroin (30%) and heavy drinking of alcohol (16%). Data from the convenience sample of non-HIV subjects, recruited from the same geographic location for a different study, are also included in Table 1. These controls were selected on the basis of absence of mental illness and substance use disorders, with the exception of cannabis use. They therefore differ from the HIV+ subjects on several important demographic and clinical variables.
Characteristics | HIV+ (N=100) | % | Controls (N=43) | % |
---|---|---|---|---|
Female | 56 | 56 | 26 | 60 |
Employed | 19 | 19 | 17 | 39 |
Married | 7 | 7 | 5 | 12 |
Education | ||||
Less than high school diploma | 31 | 31 | 12 | 28 |
High school diploma | 44 | 44 | 17 | 39 |
More than high school diploma | 25 | 25 | 14 | 33 |
African American | 96 | 96 | 41 | 95 |
Religious affiliation | 89 | 89 | 39 | 91 |
Marijuana use | 79 | 79 | 31 | 72 |
Cocaine use | 70 | 70 | 0 | 0 |
Heroin use | 30 | 30 | 0 | 0 |
Hallucinogen use | 44 | 44 | 0 | 0 |
Heavy drinking‡ | 16 | 16 | 4 | 9 |
Major Depressive Disorder | 40 | 40 | 0 | 0 |
Bipolar Disorders | 6 | 6 | 0 | 0 |
PTSD | 29 | 29 | 0 | 0 |
Anxiety Disorders¶ | 43 | 43 | 0 | 0 |
CD4>350 (cells/µl)§ | 44 | 80 | - | - |
Undetectable viral load† | 33 | 55 | - | - |
ART | 75 | 75 | - | - |
ART adherence (100%) ¥ | 32 | 43 | - | - |
 | Median | IQR | Median | IQR |
Age | 48 | 43-55 | 51 | 47-56 |
SOC, problems solved | 6 | 5-7 | 7 | 6-9 |
SWM strategy | 37 | 35-40 | 39 | 36-45 |
 | Median | IQR | - | - |
Viral Load (copies/ml)† | <48 | <20-530 | - | - |
CD4 (cells/µl)§ | 576 | 400-788 | - | - |
Note: Abbreviations: %, percent proportion; IQR, interquartile range; PTSD, post-traumatic stress disorder; SOC, Stockings of Cambridge (problems solved in minimum moves); SWM, Spatial Working Memory; ART, antiretroviral therapy.
‡ Heavy drinking is missing for one participant.
¶Anxiety disorders include Generalized Anxiety Disorders, Specific Phobias, Obsessive-Compulsive Disorders and Panic Disorders
§ Fifty-five patients had CD4 cell count data within 6 months of study; 80% represents % proportion among those that have CD4 data.
† Sixty patients had viral load data within 6 months of study.
¥ Forty-three percent represents the % proportion of the 75 people on ART who take medications 100% of the time.
Table 1: Demographic and clinical characteristics of the study sample and controls.
Olfactory task and IGT performance outcomes
Distributions of scores on the olfactory tests and the IGT are summarized in Table 2. Olfactory psychophysical test scores were compared to those of a subset of the non-mentally ill HIV-negative participants (N=15; 60% female; median age=51 years) recruited as controls for our molecular study of bipolar disorders during the same time period, from the same source population as the HIV+ participants. Olfactory task performance in this comparison group was similar to that of our HIV+ study population. However, scores on Net 1 to Net 5 of the IGT in the HIV+ study population were considerably lower than the scores obtained from the 28 controls with IGT data.
Characteristics | Median (IQR) | Median (IQR) |
---|---|---|
Olfactory Measures | Study sample (N=100) | Healthy controls (N=15) |
Smell Threshold | 6.5 (4.5 – 8.0) | 6.5 (5 – 7.5) |
Smell Identification | 8.0 (7.0 – 9.0) | 9.0 (7.0 – 10.0) |
Smell Memory | 17.0 (14.0 – 19.0) | 17.0 (15.0 – 19.0) |
Smell Discrimination | 8.0 (7.0 – 9.0) | 7.0 (6.0 – 10.0) |
IGT Measures | Study sample (N=100) | Healthy controls (N=28) |
Net 1 score | -2.0 (-4.0 – 2.0) | 0.0 (-4 – 4) |
Net 2 score | 0.0 (-4.0 – 2.0) | 1.0 (-2 – 4) |
Net 3 score | -2.0 (-6.0 – 2.0) | 2.0 (0 – 4) |
Net 4 score | -2.0 (-8.0 – 2.0) | 2.0 (0 – 9) |
Net 5 score | -2.0 (-8.0 – 2.0) | 3.0 (0 – 10) |
Note: IQR: interquartile range; IGT: Iowa Gambling Task
Table 2: Distribution of scores on olfactory tests and Iowa gambling task in the cohort of HIV-infected patients (n=100) and in a comparative non-HIV sample from the same geographic region.
The supplementary material contains the pairwise correlations of IGT scores across Net 1-5. As expected, the net score for performance on the first block of 20 cards (i.e., Net 1) has the weakest paired correlations with the other blocks (Supplementary Table 3). Figure 1 represents the IGT decision-making CFA model, derived from Net scores 2 to 5 of the IGT. The coefficients associated with Net 2 to Net 5 represent the standardized loading coefficients, which can be interpreted as the average number of points by which each respective Net score differs between people who differ by one standard deviation in the decision-making trait level. As represented in Figure 1, a one standard deviation difference in the IGT ‘decision-making’ trait level is associated with differences of 0.62, 0.68, 0.74 and 0.66 points in the mean scores of the Net 2, Net 3, Net 4 and Net 5 blocks, respectively. These coefficients are strong and highly significant (P<0.001). Post-estimation results on the validity of this CFA model suggest good fit, based on a high goodness-of-fit P value (P<0.7), a high CFI (1.000) and low RMSEA and SRMR. The measurement of this trait by the observed raw scores is quite reliable (coefficient of reliability ρ=0.81). These same standardized estimates are listed in Table 3, along with the corresponding unstandardized estimates. The relationship between the trait underlying IGT performance and high-order olfactory ability (OLFAC), indexed by odor identification (Iden), discrimination (Dis) and memory (Mem), is illustrated in the two-factor CFA results in Figure 2. All standardized coefficients of OLFAC on Mem, Iden and Dis are also large and significant. The curved arrow between the two latent traits (in circles) represents the correlation between these traits, and shows a moderate correlation (r=0.28, P<0.02) between the trait underlying IGT performance and olfactory cognition. Post-estimation measures for the two-factor CFA support a very good fit. Table 3 also lists the standardized estimates for the two-factor model shown in Figure 2, along with their corresponding unstandardized estimates and associated 95% confidence intervals.
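For reference, a common composite-reliability formula (McDonald's ω) applied to the standardized loadings above lands in the same range as the reported ρ=0.81; the paper does not state which reliability estimator was used, so the following is only a hedged worked example:

```latex
% Composite reliability (McDonald's omega) from standardized loadings \lambda_i:
\omega \;=\; \frac{\bigl(\sum_i \lambda_i\bigr)^{2}}
                  {\bigl(\sum_i \lambda_i\bigr)^{2} \;+\; \sum_i \bigl(1-\lambda_i^{2}\bigr)}
       \;=\; \frac{(0.62+0.68+0.74+0.66)^{2}}{2.70^{2} + 2.17}
       \;\approx\; 0.77
```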
 | Unstandardized values | | Standardized values | |
---|---|---|---|---|
 | Coefficient | 95% CI | Coefficient | 95% CI |
IGT Factor | |||||
Loadings | |||||
Net 2 | 1.00 (fixed) | - | 0.62 | 0.44, 0.79 | |
Net 3 | 1.39 | 0.81, 1.97 | 0.68 | 0.54, 0.82 | |
Net 4 | 1.61 | 0.95, 2.28 | 0.76 | 0.63, 0.89 | |
Net 5 | 1.51 | 0.79, 2.24 | 0.66 | 0.50, 0.82 | |
IGT-Olfaction | |||||
Loadings | |||||
Net 2 | 1.00 (fixed) | - | 0.61 | 0.44, 0.78 | |
Net 3 | 1.42 | 0.84, 1.99 | 0.68 | 0.55, 0.82 | |
Net 4 | 1.66 | 0.99, 2.33 | 0.77 | 0.65, 0.90 | |
Net 5 | 1.49 | 0.77, 2.20 | 0.64 | 0.48, 0.80 | |
Ident | 1.00 (fixed) | - | 0.74 | 0.60, 0.89 |
Mem | 2.41 | 1.55, 3.27 | 0.93 | 0.78, 1.07 | |
Disc | 0.67 | 0.40, 0.93 | 0.53 | 0.36, 0.69 |
Table 3: Unstandardized and standardized loading coefficients of the IGT decision-making factor and of the two-factor decision-making/olfactory ability model in the HIV-infected cohort.
Association between commonly abused drugs in this population and predicted factor scores of decision-making and olfaction
Results of the multiple regression analyses of each participant’s predicted decision-making and olfactory ability scores on recent use of cocaine, heroin and hallucinogens (adjusting for age, gender, psychiatric diagnosis, SWM and SOC) are presented in Table 4. Compared to people without recent cocaine use, the predicted mean score on the IGT decision-making trait was 1.36 points lower (P<0.04) in those who used cocaine within the last 6 months. Similarly, the mean factor score on IGT performance was 1.61 units lower (P<0.05) for recent heroin users compared to those abstinent from heroin in the last 6 months. No significant difference in predicted IGT factor scores was observed by recent hallucinogen use. The relationship between predicted factor scores of olfactory ability and drug use is also presented in Table 4. Olfactory ability was: 0.53 units lower in cocaine users versus non-users (P<0.02); 0.76 units lower in heroin users versus non-users (P<0.01); and 0.51 units lower in hallucinogen users versus non-users (P<0.04). Since any intranasal use of cocaine or heroin would more likely impact the peripheral olfactory system, we explored the association between odor threshold sensitivity and cocaine or heroin use and found none (P=0.49 for cocaine and P=0.45 for heroin).
 | Decision-making | | | Olfactory cognition | | |
---|---|---|---|---|---|---|
 | β | SE | P | β | SE | P |
Cocaine use | -1.36 | 0.65 | <0.04 | -0.53 | 0.23 | <0.02 |
Heroin use | -1.61 | 0.78 | <0.05 | -0.76 | 0.29 | <0.01 |
Hallucinogen use | -0.62 | 0.65 | <0.40 | -0.51 | 0.25 | <0.04 |
Note: † Regression analyses adjusted for age, sex, psychiatric diagnosis and cognitive measures (Spatial Working Memory (SWM) strategy and Stockings of Cambridge (SOC) scores). β, regression coefficient: the effect of recent use (in the past 6 months) of cocaine, heroin or hallucinogens on the predicted factor score for decision-making or olfactory ability; SE, standard error of the regression coefficient; P, P-value.
Table 4: Relationship between addictive behaviors and predicted factor scores on decision-making and olfactory ability†.
We evaluated the relationship between olfaction and decision-making on the IGT in a cohort of predominantly African American HIV+ subjects in Washington, DC. Impaired high-order olfactory processing, measured by a simple, inexpensive office-based test, was associated with poor (or random) performance on the IGT, highlighted by an inability to develop a positive or negative learning curve over the course of the IGT. We provided sufficient justification for investigating whether a unidimensional latent trait underlies the random IGT performance in this previously unstudied minority population with HIV, and whether this abnormal IGT performance is related to abnormal high-order olfactory functioning. These results are consistent with previous studies suggesting that central olfactory processing regions may play a role in decision-making in non-HIV populations [3,37-39]. We extended these findings by demonstrating that olfactory ability predicts IGT performance in an HIV+ population, and identified significant inverse relationships between recent use of cocaine, opiates and hallucinogens and both olfactory and IGT performance in this population. If validated, combined measurement of IGT performance and olfactory ability could become a useful clinical measure of recurrent drug use and other high-risk behaviors in HIV+ populations.
In this study, the construct validity of the trait underlying impaired learning or ‘decision-making’ on the IGT, indexed by the observed Net scores across blocks, was demonstrated through robust, systematic statistical fit in the SEM post-estimation studies. The near-perfect fit indices for the unidimensional construct of IGT performance could be due to the relatively low degrees of freedom in the measurement model. However, these excellent fit measures were reproduced in the two-factor model relating the IGT construct to the olfactory ability construct; this two-factor model has more degrees of freedom, sufficient to reveal evidence of unstable fit when present. Additionally (not shown in this study), we explored pairwise correlations of Net scores 2-6 in a fraction of subjects (N=26) who completed an unconventional 6-block IGT instead of the 5-block version. We observed strong pairwise correlations between the Net scores for the 6th block and blocks 2-5, which suggests that a larger number of degrees of freedom would not significantly change the evidence for a unidimensional factor underlying random IGT performance in this sample.
Though the relationship between poor IGT performance and real-life risky decision-making is well documented [13,40-44], we take the cautious approach of labeling this construct simply as a ‘poor IGT performance’ construct rather than a ‘risky decision-making’ construct, because we believe a prospective study directly linking dimensional scores on this IGT trait to future high-risk behaviors will be required to make a more definitive connection to risky decision-making. The rationale for measuring IGT performance alongside odor performance is based on the scientific premise of reproducible neuroanatomical evidence for involvement of the medial orbitofrontal cortex (mOFC) in decision-making, IGT performance and high-order olfactory function [3,32]. Since ablation of these regions in animals and lesion studies of these regions in humans both produce a constellation of poor decisions, impaired olfaction and impaired IGT performance, it was reasonable to hypothesize that HIV+ individuals with jointly poor IGT performance (particularly random performance) and poor olfactory ability may more strongly reflect functional or structural abnormality in the OFC region. This is particularly important in HIV+ individuals, given that HIV is associated with impaired olfaction and with a spectrum of neurocognitive disorders, both reflective of HIV-associated neurotoxicity in the related brain regions in humans [45].
A bi-directional relationship has been observed between the OFC and cocaine use: exposure to cocaine leads to structural reorganization of the OFC, and reduced OFC volume has been found to predict the development of drug addiction [46-49]. Therefore, while further mechanistic and neuroanatomical studies (neuroimaging in humans) are needed to convincingly demonstrate that the joint impairment in olfactory ability and IGT performance in our study is due to dysfunction in the OFC and PFC regions, we presumed that significant statistical correlations between dimensional scores on these two constructs and recurring substance use disorders would provide a measure of external validity for the possible existence of frontal lobe problems in this sample. Although this study cannot resolve whether pre-existing frontal lobe problems led to drug use or drug use caused frontal dysfunction, the finding of strong statistical relationships between recent drug use and both olfaction and IGT performance generates hypotheses for future work. For instance, prospective studies could determine whether joint impairments in central olfaction and this IGT construct are valid predictors of drug relapse in high-risk populations such as ours.
This study also has other limitations. Only a small proportion of our participants demonstrated a learning curve across the blocks. Post hoc power estimations revealed that we would need twice the sample size of this study for reliable estimation of the linear and quadratic parameters of an appropriate growth curve model to characterize a learning component. Another limitation is the demographic homogeneity of the study sample, which limits the generalizability of the findings to other HIV+ populations. On the other hand, by focusing on low-income African-Americans, who are disproportionately affected by the HIV epidemic in the US, the study increases the relevance of the existing neuroHIV literature to the US epidemic [50]. The non-HIV controls used for comparison of olfaction, IGT and cognitive scores are not exactly matched controls: they are controls for a bipolar genetic study, and as such were subject to stringent eligibility criteria that make them unlikely to be representative members of the geographic target population. As such, it would be improper to compare the HIV+ subjects to these controls statistically in order to compute Z scores of impairment in our study population. Similarly, the exclusion of HIV+ patients with severe neurological diseases, such as stroke, dementia, encephalitis, multiple sclerosis and traumatic brain injury, may be viewed as a minor limitation, because having patients with defined structural brain diseases would offer an opportunity to link any observed impairment in IGT or olfactory ability to the brain regions radiologically affected by such pre-existing diseases. This HIV+ cohort was initially established for the purpose of a prospective study of incident HIV-Associated Neurocognitive Disease (HAND); hence, as is common practice in studies of HAND, HIV+ patients with pre-existing neurological diseases were excluded. However, it is important to note that impairments in olfaction and decision-making are not restricted to people with severe neurological diagnoses, as studies have shown inverse correlations between grey matter volume in the OFC and both decision-making and olfaction in people without neurological diagnoses [4,48].
In this study, we relied on HIV viral load and immunological data collected as part of routine clinical visits, which unfortunately led to missing laboratory data for a substantial number of patients. Our future approach will include recruiting both HIV-infected and uninfected participants into the same study and collecting blood for viral and immunological studies at the time of enrollment.
To conclude, in this predominantly African-American cohort of HIV+ individuals, reduced high-order olfactory task performance was associated with impaired performance on the IGT, suggestive of possible impairment in decision-making. Impairments in both olfactory ability and IGT performance predicted recent substance use, highlighting the possible clinical utility of combining both tasks to identify HIV+ subjects with persisting high-risk behaviors and, possibly, to determine the effect of behavioral interventions in reducing future risk behaviors. If validated in larger cross-sectional and prospective studies, these simple, inexpensive office tasks may have clinical, public health and research implications [51,52].
This project has been funded in part with Federal funds from the National Cancer Institute, National Institutes of Health, under Contract No. HHSN261200800001E. Funding support for olfactory testing, and for the enrollment and neuropsychological testing of the control subjects, was through USPHS grant MH-091460 (PI, Nwulia). This publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. Disclaimer: Dr. Kapetanovic contributed to this article as part of his official duties while working in the NIMH Intramural Office.
This study would not have been possible without the foresight and support of Drs. Henry Masur and Maryland Pao of the National Institutes of Health. We are also grateful to the staff at the medical records division of the Family and Medical Counseling Services, Inc., notably Ms. Deborah Parris, for making the data extraction possible.