

The SBIR program, as established by law, is intended to meet the goals set out in statute. Therefore, an agency that wishes to fund an SBIR Phase III project is not required to conduct another competition in order to satisfy those statutory provisions.

Applicants are not required to identify a potential awarding component prior to submission of the application, but may request one on the Assignment Request Form. An informational webinar will occur on Thursday, June 29. Frequently asked questions are available to assist applicants and can answer many basic questions about the program. See Other Information for award authorities and regulations.

A grant is a support mechanism providing money, property, or both to an eligible entity to carry out an approved project or activity. The number of awards is contingent upon NIH appropriations and the submission of a sufficient number of meritorious applications. The current list of approved topics is available online. Please also refer to the appropriate Institute's or Center's topic section to determine whether it will consider applications above these amounts.

Applicants are strongly encouraged to contact NIH program officials prior to submitting any application in excess of the guidelines and early in the application planning process.

In all cases, applicants should propose a budget that is reasonable and appropriate for completion of the research project. According to statutory guidelines, award periods normally may not exceed 6 months for Phase I and 2 years for Phase II. Applicants are encouraged to propose a project duration period that is reasonable and appropriate for completion of the research project.

Only United States small business concerns (SBCs) are eligible to submit applications for this opportunity. A small business concern is one that, at the time of award of Phase I and Phase II, meets all of the following criteria:

Is organized for profit, with a place of business located in the United States, which operates primarily within the United States or which makes a significant contribution to the United States economy through payment of taxes or use of American products, materials or labor;

Is in the legal form of an individual proprietorship, partnership, limited liability company, corporation, joint venture, association, trust or cooperative, except that where the form is a joint venture, there must be less than 50 percent participation by foreign business entities in the joint venture; or be a joint venture in which each entity to the joint venture must meet the requirements set forth in paragraph (3)(i) or (3)(ii) of this section.

See Application and Submission Information for additional instructions regarding required application certification. If an Employee Stock Ownership Plan owns all or part of the concern, each stock trustee and plan member is considered an owner. If a trust owns all or part of the concern, each trustee and trust beneficiary is considered an owner.

SBCs must also meet the other regulatory requirements found in 13 C.F.R. Business concerns include, but are not limited to, any individual proprietorship, partnership, corporation, joint venture, association, or cooperative, other than investment companies licensed, or state development companies qualifying, under the Small Business Investment Act of 1958 (15 U.S.C.).

For these companies, the benchmark establishes a minimum number of Phase II awards the company must have received for a given number of Phase I awards received during the 5-year time period in order to be eligible to receive a new Phase I award. This requirement does not apply to companies that have received 20 or fewer Phase I awards over the 5-year period. Companies that apply for a Phase I award and do not meet or exceed the benchmark rate will not be eligible for a Phase I award for a period of one year from the date of the application submission.

The benchmark minimum Transition Rate is 0.25. This requirement applies to companies that have received more than 15 Phase II awards from all agencies over the past 10 years, excluding the two most recently completed Fiscal Years. This requirement does not apply to companies that have received 15 or fewer Phase II awards over the 10-year period, excluding the two most recently completed Fiscal Years. Non-domestic (non-U.S.) Entities (Foreign Institutions) are not eligible to apply. Non-domestic (non-U.S.) components of U.S. Organizations are not eligible to apply.
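The transition-rate arithmetic behind these benchmarks can be sketched as a simple check. The rate and threshold used below are placeholders for illustration, not the official agency figures:

```python
# Hypothetical sketch of a transition-rate benchmark check. The benchmark
# rate (min_rate) and the exemption threshold (phase1_threshold) are
# placeholder values, not the official ones.

def meets_transition_benchmark(phase1_awards: int,
                               phase2_awards: int,
                               min_rate: float = 0.25,
                               phase1_threshold: int = 20) -> bool:
    """Return True if a company may receive a new Phase I award.

    Companies at or below the Phase I award threshold are exempt; all
    others must have a Phase II / Phase I transition rate at or above
    the benchmark minimum.
    """
    if phase1_awards <= phase1_threshold:
        return True  # benchmark does not apply to small award counts
    return (phase2_awards / phase1_awards) >= min_rate
```

For example, a company with 40 Phase I awards and 12 Phase II awards has a transition rate of 0.30 and would pass an assumed 0.25 benchmark, while one with 40 and 5 (rate 0.125) would not.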

All registrations must be completed prior to the application being submitted. Registration can take 6 weeks or more, so applicants should begin the registration process as soon as possible.

The NIH Policy on Late Submission of Grant Applications states that failure to complete registrations in advance of a due date is not a valid reason for a late submission. Obtaining an eRA Commons account can take up to 2 weeks. Individuals from underrepresented racial and ethnic groups as well as individuals with disabilities are always encouraged to apply for NIH support. Occasionally, deviations from this requirement may occur.

Applicant organizations may submit more than one application, provided that each application is scientifically distinct.

NIH will not accept similar grant applications with essentially the same research focus from the same applicant organization.

This includes derivative or multiple applications that propose to develop a single product, process, or service that, with non-substantive modifications, can be applied to a variety of purposes. The NIH will not accept duplicate or highly overlapping applications under review at the same time.

This means that the NIH will not accept such applications. A Federal laboratory is defined in 15 U.S.C. See your administrative office for instructions if you plan to use an institutional system-to-system solution.

Conformance to the requirements in the Application Guide is required and strictly enforced. Applications that are out of compliance with these instructions may be delayed or not accepted for review.

If you have previously registered, you are still required to attach proof of registration. Follow the steps listed below to register and attach proof of registration to your application. Changing the file name may cause delays in the processing of your application. The SBIR Application Certification applies to small business concerns majority-owned by multiple venture capital operating companies, hedge funds, or private equity firms.

Applicant small business concerns that are majority-owned by multiple venture capital operating companies, hedge funds, or private equity firms should follow the instructions below. Save the certification using the original file name. Do not use the Appendix to circumvent page limits. Overview Information contains information about Key Dates and times.

Applicants are encouraged to submit applications before the due date to ensure they have time to make any application corrections that might be necessary for successful submission. When a submission date falls on a weekend or Federal holiday, the application deadline is automatically extended to the next business day.
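The weekend/holiday rule above can be sketched as a small helper; the holiday set is supplied by the caller, and the dates in the example are purely illustrative:

```python
# Sketch of the deadline rule: a due date falling on a weekend or a
# listed Federal holiday rolls forward to the next business day.
from datetime import date, timedelta

def effective_deadline(due, holidays=frozenset()):
    """Roll a due date forward past weekends and listed holidays."""
    d = due
    while d.weekday() >= 5 or d in holidays:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d
```

A Saturday due date rolls to the following Monday; a weekday due date that is a listed holiday rolls to the next non-holiday business day.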

Organizations must submit applications to Grants.gov. Applicants are responsible for viewing their application before the due date in the eRA Commons to ensure accurate and successful submission. This initiative is not subject to intergovernmental review. Paper applications will not be accepted.

Applicants must complete all required registrations before the application due date. Eligibility Information contains information about registration.

For assistance with your electronic application or for more information on the electronic submission process, visit Applying Electronically.

If you encounter a system issue beyond your control that threatens your ability to complete the submission process on-time, you must follow the Guidelines for Applicants Experiencing System Issues. See more tips for avoiding common errors. Upon receipt, applications will be evaluated for completeness and compliance with application instructions by the Center for Scientific Review, NIH.

Applications that are incomplete or non-compliant will not be reviewed. Applicants are required to follow the instructions for post-submission materials, as described in the policy. Only the review criteria described below will be considered in the review process.

As part of the NIH mission , all applications submitted to the NIH in support of biomedical and behavioral research are evaluated for scientific and technical merit through the NIH peer review system.

Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria as applicable for the project proposed.

Reviewers will consider each of the review criteria below in the determination of scientific merit, and give a separate score for each. An application does not need to be strong in all categories to be judged likely to have major scientific impact.

NSDUH data for youths aged 12 to 17 are not presented for certain years because of design changes in the survey. These design changes preclude direct comparisons of those estimates with estimates from prior years. The most prevalent category of misused prescription drugs among youths was pain relievers.

NSDUH and MTF both collect data on misuse of prescription drugs, but they use somewhat different definitions and questioning strategies. For example, NSDUH defines misuse as use of prescription drugs that were not prescribed for the respondent or use of these drugs only for the experience or feeling they caused; MTF defines misuse as use not under a doctor's orders.

MTF also does not estimate overall prescription drug misuse. For the data on narcotics other than heroin, there was a questionnaire change in the MTF that resulted in increased reporting of misuse of narcotics other than heroin, such that earlier estimates are not strictly comparable with later ones.

Both surveys showed lower rates of nonmedical use than in earlier years. In NSDUH, the rate of nonmedical use of pain relievers in the past year among 12 to 17 year olds was about 4 percent. In MTF, the rate for nonmedical use of narcotics other than heroin in the past year was about 7 percent. The rates among 12th graders did not differ significantly across those periods; see Johnston, O'Malley, Bachman, and Schulenberg for a comparison of rates across years. Data for MTF are for "narcotics other than heroin."

Potential reasons for differences from the data for youths are the relatively smaller MTF sample size for young adults and possible bias in the MTF sample due to noncoverage of school dropouts and a low overall response rate; the MTF response rate for young adults is affected by nonresponse by schools, by students in the 12th grade survey, and by young adults in the follow-up mail survey.

Both surveys showed an increase in past month marijuana use among young adults over the period. Both surveys showed declines in past month cigarette use over the period. Both surveys showed no significant change in rates of past month cigarette use among young adults in the most recent comparison, and there also was no significant change in the rate of current alcohol use among young adults in either survey. Trend data for adults aged 19 to 24 in MTF showed similar rates.

Excluded from the survey are persons with no fixed household address (e.g., homeless persons not in shelters).

Because the coordinated design enabled estimates to be developed by State in all 50 States plus the District of Columbia, States may be viewed as the first level of stratification and as a variable for reporting estimates.

The actual sample sizes in these States were each in the 3,000s. For the remaining 42 States and the District of Columbia, the target sample size was smaller, and sample sizes in these States were correspondingly lower. This approach ensured there was sufficient sample in every State to support State estimation by either direct methods or small area estimation (SAE), while at the same time providing adequate precision for national estimates.

These State sampling regions (SSRs) were contiguous geographic areas designed to yield approximately the same number of interviews. Within each SSR, 48 census tracts were selected with probability proportional to population size. One area segment was selected within each sampled census tract with probability proportional to population size. Eight reserve sample segments per SSR were fielded during the survey year.

Four of these segments were retained from the survey, and four were selected for use in the survey. These sampled segments were allocated equally into four separate samples, one for each 3-month period calendar quarter during the year.

That is, a sample of addresses was selected from two segments in each calendar quarter so that the survey was relatively continuous in the field. In each of the area segments, a listing of all addresses was made, from which a national sample of addresses was selected.
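The probability-proportional-to-size selection described above can be illustrated with a systematic PPS sketch. This is the generic textbook procedure, not NSDUH's actual selection software, and the unit sizes are made up:

```python
# Illustrative systematic probability-proportional-to-size (PPS) selection,
# of the kind used to pick census tracts or segments by population size.
import random

def pps_systematic(sizes, n, rng):
    """Select n unit indices with probability proportional to size."""
    total = sum(sizes)
    step = total / n
    start = rng.uniform(0, step)            # random start in the first interval
    points = [start + k * step for k in range(n)]
    chosen, cum, i = [], 0.0, 0
    for idx, s in enumerate(sizes):
        cum += s                             # walk the cumulative size scale
        while i < n and points[i] < cum:     # selection points landing in unit idx
            chosen.append(idx)
            i += 1
    return chosen
```

Larger units are hit by the evenly spaced selection points more often, so selection probability is proportional to size.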

Of the selected addresses, a portion were determined to be eligible sample units. In these sample units (which can be either households or units within group quarters), sample persons were randomly selected using an automated screening procedure programmed in a handheld computer carried by the interviewers. Youths aged 12 to 17 years and young adults aged 18 to 25 years were oversampled at this stage, while persons aged 26 or older were sampled at lower rates.

Nationwide, approximately 88,000 persons were selected. In addition, State samples were representative of their respective State populations. The data collection method used in NSDUH involves in-person interviews with sample persons, incorporating procedures to increase respondents' cooperation and willingness to report honestly about their illicit drug use behavior. Confidentiality is stressed in all written and oral communications with potential respondents.

Respondents' names are not collected with the data, and computer-assisted interviewing (CAI) methods are used to provide a private and confidential setting to complete the interview. Introductory letters are sent to sampled addresses, followed by an interviewer visit. The computer uses the demographic data in a preprogrammed selection algorithm to select zero to two sample persons, depending on the composition of the household.

This selection process is designed to provide the necessary sample sizes for the specified population age groupings. All interviewers carry copies of this letter in Spanish. If the interviewer is not certified bilingual, he or she will use preprinted Spanish cards to attempt to find someone in the household who speaks English and who can serve as the screening respondent or who can translate for the screening respondent.

In households where a language other than Spanish is encountered, another language card is used to attempt to find someone who speaks English to complete the screening. If the sample person prefers to complete the interview in Spanish, a certified bilingual interviewer is sent to the address to conduct the interview. Immediately after the completion of the screener, interviewers attempt to conduct the NSDUH interview with each sample person in the household.

The interviewer requests that the sampled respondent identify a private area in the home to conduct the interview away from other household members. The interview averages about an hour and includes a combination of CAPI (computer-assisted personal interviewing), in which the interviewer reads the questions, and ACASI (audio computer-assisted self-interviewing). The core consists of initial demographic items, which are interviewer-administered, and self-administered questions pertaining to the use of tobacco, alcohol, marijuana, cocaine, crack cocaine, heroin, hallucinogens, inhalants, pain relievers, tranquilizers, stimulants, and sedatives.

Topics in the remaining noncore self-administered sections include, but are not limited to, injection drug use, perceived risks of substance use, substance dependence or abuse, arrests, treatment for substance use problems, pregnancy and health care issues, and mental health issues. Noncore demographic questions, which are interviewer-administered and follow the ACASI questions, address such topics as immigration, current school enrollment, employment and workplace issues, health insurance coverage, and income.

Thus, the interview begins in CAPI mode with the FI (field interviewer) reading the questions from the computer screen and entering the respondent's replies into the computer. No personal identifying information about the respondent is captured in the CAI record. Screening and interview data are encrypted while they reside on laptops and mobile computers. Data are transmitted back to RTI on a regular basis using either a direct dial-up connection or the Internet.

All data are encrypted while in transit across dial-up or Internet connections. After the data are transmitted to RTI, certain cases are selected for verification.

For completed interviews, respondents write their home telephone number and mailing address on a quality control form and seal the form in a preaddressed envelope that FIs mail back to RTI.

All contact information is kept completely separate from the answers provided during the screening or interview. Samples of respondents who completed screenings or interviews are randomly selected for verification. These cases are called by telephone interviewers who ask scripted questions designed to determine the accuracy and quality of the data collected. Any cases discovered to have a problem or discrepancy are flagged and routed to a small specialized team of telephone interviewers who recontact respondents for further investigation of the issue(s).

Depending on the amount of an FI's work that cannot be verified through telephone verification (including cases with bad telephone numbers), cases may be selected for field verification. Field verification involves another FI returning to the sampled DU (dwelling unit) to verify the accuracy and quality of the data in person. If the verification procedures identify situations in which an FI has falsified data, the FI is terminated.

All cases completed that quarter by the falsifying FI are verified and reworked by the FI conducting the field verification. Any cases completed by the falsifying FI in earlier quarters of the same year are also verified. All cases from earlier quarters identified as falsified or unresolvable are removed and not reworked. Examples of unresolvable cases include those for which verifiers were never able to make contact with a resident of the DU, residents who refused to verify their data, previous residents who had moved, or residents who reported accurate roster data for the DU but did not recall speaking to an FI.

Data that FIs transmit to RTI are processed to create a raw data file in which no logical editing of the data has been done. The raw data file consists of one record for each transmitted interview. Cases are eligible to be treated as final respondents only if they provided data on lifetime use of cigarettes and at least 9 out of 13 of the other substances in the core section of the questionnaire. Even though editing and consistency checks are done by the CAI program during the interview, additional, more complex edits and consistency checks are completed at RTI.

Additionally, statistical imputation is used to replace missing or ambiguous values after editing for some key variables. Analysis weights are created so that estimates will be representative of the target population. Details of the editing, imputation, and weighting procedures will appear in the NSDUH Methodological Resource Book, which is in process.

With the exception of industry and occupation data, coding of written answers that respondents or interviewers typed was performed at RTI for the NSDUH. Written responses in "OTHER, Specify" data were assigned numeric codes through computer-assisted survey procedures and the use of a secure Web site that allowed for coding and review of the data. The computer-assisted procedures entailed a database check for a given "OTHER, Specify" variable that contained typed entries and the associated numeric codes.

If an exact match was found between the typed response and an entry in the system, the computer-assisted procedures assigned the appropriate numeric code. Typed responses that did not match an existing entry were coded through the Web-based coding system.

Data on the industries in which respondents worked and respondents' occupations were assigned numeric industry and occupation codes by staff at the U.S. Census Bureau. As noted above, the CAI program included checks that alerted respondents or interviewers when an entered answer was inconsistent with a previous answer in a given module. In this way, the inconsistency could be resolved while the interview was in progress.

However, not every inconsistency was resolved during the interview, and the CAI program did not include checks for every possible inconsistency that might have occurred in the data.

For example, if respondents reported that they never used a given drug, the CAI logic skipped them out of all remaining questions about use of that drug. Similarly, respondents were instructed in the prescription psychotherapeutics modules (i.e., the pain reliever, tranquilizer, stimulant, and sedative modules) not to report use of over-the-counter (OTC) drugs.

Therefore, if a respondent's only report of lifetime use of a particular type of "prescription" psychotherapeutic drug was for an OTC drug, the respondent was logically inferred never to have been a nonmedical user of the prescription drugs in that psychotherapeutic category. In addition, respondents could report that they were lifetime users of a drug but not provide specific information on when they last used it.

In this situation, a temporary "indefinite" value for the most recent period of use was assigned to the edited recency-of-use variable. The editing procedures for key drug use variables also involved identifying inconsistencies between related variables, such as a reported period of most recent use that conflicted with the reported age at first use, so that these inconsistencies could be resolved through statistical imputation. In this example, the inconsistent period of most recent use was replaced with an "indefinite" value, and the inconsistent age at first use was replaced with a missing data code.

These indefinite or missing values were subsequently imputed through statistical procedures to yield consistent data for the related measures, as discussed in the next section. For some key variables that still had missing or ambiguous values after editing, statistical imputation was used to replace these values with appropriate response codes.

In this case, the imputation procedure assigns a value for when the respondent last used the drug. Similarly, if a response is completely missing, the imputation procedures replace missing values with nonmissing ones. The method used is predictive mean neighborhood (PMN). The PMN method has some similarity with the predictive mean matching method of Rubin, except that, for the donor records, Rubin used the observed variable value (not the predictive mean) to compute the distance function.

Also, the well-known method of nearest neighbor imputation is similar to PMN, except that the distance function is in terms of the original predictor variables and often requires somewhat arbitrary scaling of discrete variables.

PMN is a combination of a model-assisted imputation methodology and a random nearest neighbor hot-deck procedure. The hot-deck procedure within the PMN method ensures that missing values are imputed to be consistent with nonmissing values for other variables. Whenever feasible, the imputation of variables using PMN is multivariate, in which imputation is accomplished on several response variables at once.

In the modeling stage of PMN, the model chosen depends on the nature of the response variable. In the NSDUH, the models included binomial logistic regression, multinomial logistic regression, Poisson regression, time-to-event survival regression, and ordinary linear regression, where the models incorporated the sampling design weights.

In general, hot-deck imputation replaces an item nonresponse (missing or ambiguous) value with a recorded response that is donated from a "similar" respondent who has nonmissing data. For random nearest neighbor hot-deck imputation, the missing or ambiguous value is replaced by a responding value from a donor randomly selected from a set of potential donors.

Potential donors are those defined to be "close" to the unit with the missing or ambiguous value according to a predefined function called a distance metric. In the hot-deck procedure of PMN, the set of candidate donors (the "neighborhood") consists of respondents with complete data who have a predicted mean close to that of the item nonrespondent. The predicted means are computed both for respondents with and without missing data, which differs from Rubin's method, where predicted means are not computed for the donor respondents. In the univariate case (where only one variable is imputed using PMN), the neighborhood of potential donors is determined by calculating the relative distance between the predicted mean for an item nonrespondent and the predicted mean for each potential donor, then choosing those donors whose predicted means are close as defined by the distance metric.
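The model-plus-hot-deck sequence can be sketched as follows. The simple least-squares model, the single predictor, and the fixed-size neighborhood are simplified stand-ins for NSDUH's weighted regression models and distance metric:

```python
# Minimal sketch of the predictive mean neighborhood (PMN) idea:
# 1) fit a model on complete cases, 2) compute predicted means for all
# cases, 3) impute each missing value from a donor drawn at random among
# respondents whose predicted means are closest.
import random

def pmn_impute(x, y, rng, k=3):
    """Impute None entries of y via a nearest-predicted-mean hot deck."""
    complete = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    # Model stage: ordinary least-squares line fit on complete cases.
    n = len(complete)
    mx = sum(xi for xi, _ in complete) / n
    my = sum(yi for _, yi in complete) / n
    sxx = sum((xi - mx) ** 2 for xi, _ in complete)
    b = sum((xi - mx) * (yi - my) for xi, yi in complete) / sxx
    a = my - b * mx
    pred = [a + b * xi for xi in x]          # predicted means, all cases
    donors = [i for i, yi in enumerate(y) if yi is not None]
    out = list(y)
    for i, yi in enumerate(y):
        if yi is None:
            # Hot-deck stage: k donors with the closest predicted means.
            hood = sorted(donors, key=lambda j: abs(pred[j] - pred[i]))[:k]
            out[i] = y[rng.choice(hood)]     # random draw from the neighborhood
    return out
```

Because the donated value is an actually observed response, the imputed value is automatically a plausible one, which is the appeal of the hot-deck stage.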

The pool of donors is restricted further to satisfy logical constraints whenever necessary. Whenever possible, missing or ambiguous values for more than one response variable are considered together. In this multivariate case, the distance metric is a Mahalanobis distance, which takes into account the correlation between variables (Manly), rather than a Euclidean distance. The Euclidean distance is the square root of the sum of squared differences between each element of the predictive mean vector for the respondent and the predictive mean vector for the nonrespondent.

The Mahalanobis distance standardizes the Euclidean distance by the variance-covariance matrix, which is appropriate for random variables that are correlated or have heterogeneous variances. Whether the imputation is univariate or multivariate, only missing or ambiguous values are replaced, and donors are restricted to be logically consistent with the response variables that are not missing.
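The relationship between the two distances can be shown concretely in two dimensions, where the matrix algebra stays explicit; the covariance matrix below is illustrative, not estimated from survey data:

```python
# Euclidean vs. Mahalanobis distance between two 2-dimensional
# predictive mean vectors. The covariance matrix is a made-up example.

def euclidean(u, v):
    """Square root of the sum of squared element-wise differences."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def mahalanobis2(u, v, cov):
    """Mahalanobis distance for 2-vectors, given a 2x2 covariance matrix."""
    d0, d1 = u[0] - v[0], u[1] - v[1]
    (s00, s01), (s10, s11) = cov
    det = s00 * s11 - s01 * s10
    # Explicit inverse of the 2x2 covariance matrix.
    i00, i01, i10, i11 = s11 / det, -s01 / det, -s10 / det, s00 / det
    q = d0 * (i00 * d0 + i01 * d1) + d1 * (i10 * d0 + i11 * d1)
    return q ** 0.5
```

With an identity covariance matrix the two distances coincide; inflating the variance of one coordinate shrinks that coordinate's contribution, which is exactly the standardization the text describes.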

Furthermore, donors are restricted to satisfy "likeness constraints" whenever possible. That is, donors are required to have the same values for variables highly correlated with the response. For example, donors for the age at first use variable are required to be of the same age as recipients, if at all possible. If no donors are available who meet these conditions, these likeness constraints can be loosened.

Although statistical imputation could not proceed separately within each State due to insufficient pools of donors, information about each respondent's State of residence was incorporated in the modeling and hot-deck steps. For most drugs, respondents were separated into three "State usage" categories, and this categorical "State rank" variable was used as one set of covariates in the imputation models. In addition, eligible donors for each item nonrespondent were restricted to be of the same State usage category (i.e., the same State rank) as the item nonrespondent whenever possible.

Variables for measures that are highly sensitive or that may not be known to younger respondents were given special treatment in the imputation procedures, as were certain variables subject to a greater number of skip patterns and consistency checks. The general approach to developing and calibrating analysis weights involved developing design-based weights as the product of the inverse of the selection probabilities at each selection stage. NSDUH has used a four-stage sample selection scheme in which an extra selection stage of census tracts was added before the selection of a segment.

Thus, the design-based weights incorporate an extra layer of sampling selection to reflect the sample design change. Adjustment factors then were applied to the design-based weights to adjust for nonresponse, to poststratify to known population control totals, and to control for extreme weights when necessary.
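The product-of-inverse-probabilities construction can be sketched directly; the stage probabilities below are illustrative, not actual NSDUH selection probabilities:

```python
# Design-based weight as the product of the inverses of the selection
# probabilities at each sampling stage (e.g., tract, segment, dwelling
# unit, person). The probabilities used here are made-up examples.

def design_weight(stage_probs):
    """Product of inverse selection probabilities across all stages."""
    w = 1.0
    for p in stage_probs:
        w *= 1.0 / p
    return w
```

A person selected with probabilities 0.1, 0.2, 0.5, and 0.25 at the four stages represents 10 × 5 × 2 × 4 = 400 people before any nonresponse or poststratification adjustment.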

In view of the importance of State-level estimates with the State design, it was necessary to control for a much larger number of known population totals. Several other modifications to the general weight adjustment strategy that had been used in past surveys also were implemented for the first time beginning with the CAI sample.

Weight adjustments were based on a generalization of Deville and Särndal's logit model. The final weights minimize a distance function between the adjusted and initial weights, subject to the calibration constraints. Every effort was made to include as many relevant State-specific covariates (typically defined by demographic domains within States) as possible in the multivariate models used to calibrate the weights in the nonresponse adjustment and poststratification steps. Because further subdivision of State samples by demographic covariates often produced small cell sample sizes, it was not possible to retain all State-specific covariates, even after meaningful collapsing of covariate categories, and still estimate the necessary model parameters with reasonable precision.

Therefore, a hierarchical structure was used in grouping States, with covariates defined at the national level, at the census division level within the Nation, at the State group within the census division, and, whenever possible, at the State level. The U.S. Census Bureau has produced the necessary population estimates for the same year as each NSDUH survey in response to a special request. This shift to the census data could have affected comparisons between substance use estimates in later years and those from prior years.

Consistent with prior surveys in this design, control of extreme weights through separate bounds for adjustment factors was incorporated into the GEM (generalized exponential model) calibration processes for both nonresponse and poststratification. This is unlike the traditional method of winsorization, in which extreme weights are truncated at prespecified levels and the trimmed portions of weights are distributed to the nontruncated cases.

In GEM, it is possible to set bounds around the prespecified levels for extreme weights. Then the calibration process provides an objective way of deciding the extent of adjustment or truncation within the specified bounds.

A step was included to poststratify the household-level weights to obtain census-consistent estimates based on the household rosters from all screened households. An additional step poststratified the selected person sample to conform to the adjusted roster estimates. The respondent poststratification step poststratified the respondent person sample to external census data defined within the State whenever possible, as discussed above.
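The bounded logit calibration in GEM is more elaborate than fits in a short example, but classical poststratification conveys the core idea of the steps above: scale the weights within each cell so the weighted totals match known control totals. The cells and controls below are illustrative:

```python
# Simple poststratification: within each cell, scale the weights so that
# their sum matches the known population control total for that cell.
# This sketch omits GEM's bounded adjustment factors.
from collections import defaultdict

def poststratify(weights, cells, controls):
    """Scale weights so each cell's weighted total matches its control."""
    totals = defaultdict(float)
    for w, c in zip(weights, cells):
        totals[c] += w                      # current weighted total per cell
    return [w * controls[c] / totals[c] for w, c in zip(weights, cells)]
```

After the adjustment, the weighted counts reproduce the external population totals exactly, which is what "census-consistent estimates" means in the text.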

The person-level weights for estimates based on the annual averages were obtained by dividing the analysis weights for the 2 specific years by a factor of 2. The estimates of drug use prevalence from the National Survey on Drug Use and Health NSDUH are designed to describe the target population of the survey—the civilian, noninstitutionalized population aged 12 or older living in the United States.

However, it excludes some small subpopulations that may have very different drug use patterns. For example, the survey excludes active military personnel, who have been shown to have significantly lower rates of illicit drug use.

The survey also excludes two groups that have been shown to have higher rates of illicit drug use. Readers are reminded to consider the exclusion of these subpopulations when interpreting results.

This report includes national estimates that were drawn from a set of tables referred to as "detailed tables" that are available at http:. The final, nonresponse-adjusted, and poststratified analysis weights were used in SUDAAN to compute unbiased design-based drug use estimates.

The sampling error of an estimate is the error caused by the selection of a sample instead of conducting a census of the population. The use of probability sampling methods in NSDUH allows estimation of sampling error from the survey data. The SEs are used to identify unreliable estimates and to test for the statistical significance of differences between estimates. Estimates of means or proportions, such as drug use prevalence estimates for a domain d, can be expressed as a ratio estimate.
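The ratio estimate referred to above conventionally takes the following form; the symbols are assumptions of this sketch, since the formula itself does not appear in this copy:

```latex
\hat{p}_d \;=\; \frac{\hat{Y}_d}{\hat{N}_d}
        \;=\; \frac{\sum_{i \in S_d} w_i\, y_i}{\sum_{i \in S_d} w_i}
```

where S_d is the set of respondents in domain d, w_i is the analysis weight, and y_i is the indicator of substance use for respondent i.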

When the domain size is free of sampling error, an estimate of the SE for the total number of substance users can be computed directly from the SE of the prevalence estimate. This approach is theoretically correct when the domain size estimates are among those forced to match their respective U.S. Census Bureau population estimates through the weight calibration process. In addition, more detailed information about the weighting procedures will appear in the NSDUH Methodological Resource Book, which is in process.
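When the domain total is controlled to a census figure and hence treated as fixed, the SE of the estimated number of users reduces to the standard form (again with assumed notation, consistent with the ratio estimate above):

```latex
\widehat{\mathrm{SE}}\!\left(\hat{Y}_d\right)
  \;=\; N_d \cdot \widehat{\mathrm{SE}}\!\left(\hat{p}_d\right)
```

That is, the SE of the total is simply the fixed domain size times the SE of the estimated proportion.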

For estimated domain totals where the domain size is not fixed (i.e., not forced to match U.S. Census Bureau population estimates), this formulation still may provide a good approximation if it can be assumed that the sampling variation in the domain size is negligible relative to the sampling variation in the prevalence estimate. This is a reasonable assumption for many cases in this study.

For some subsets of domain estimates, the above approach can yield an underestimate of the SE of the total when the domain size estimate was subject to considerable variation. Because of this underestimation, alternatives for estimating SEs of totals were implemented. A "mixed" method approach has since been implemented for all detailed tables to improve the accuracy of SEs and to better reflect the effects of poststratification on the variance of total estimates.

This approach assigns the method of SE calculation to domains (i.e., subpopulations) according to whether the domains are considered controlled (i.e., forced to match census population estimates through the weight calibration process). Domains consisting of three-way interactions may be controlled in a single year but not necessarily in preceding or subsequent years.

As a result of the use of this mixed-method approach, the SEs for the total estimates within many detailed tables were calculated differently from those in earlier NSDUH reports.

However, the list does include all of the domains that were used in computing SEs for estimates produced in this report and in the detailed tables. This table includes both the main effects and two-way interactions and may be used to identify the method of SE calculation employed for estimates of totals. Estimates among the total population (age main effect), males and females (age by gender interaction), and Hispanics and non-Hispanics (age by Hispanic origin interaction) were treated as controlled in this table, and the formula above was used to calculate the SEs.

Estimates presented in this report for racial groups are for non-Hispanics. However, published estimates for whites by age group in this report and in the detailed tables actually represent a three-way interaction (age by race by Hispanic origin).

The criteria used to define unreliability of direct estimates from NSDUH are based on the prevalence (for proportion estimates), the relative standard error (RSE, defined as the ratio of the SE to the estimate), the nominal (actual) sample size, and the effective sample size for each estimate.

Proportion estimates (rates) within the specified range, and the corresponding estimated numbers of users, were suppressed when the suppression criterion was met. Using a first-order Taylor series approximation, an equation was derived and used for computational purposes when applying a suppression rule dependent on effective sample size.
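As a sketch of the kind of computation involved here (the actual NSDUH suppression thresholds are not reproduced; the minimum effective sample size below is a hypothetical placeholder):

```python
def relative_standard_error(estimate, se):
    """RSE: the ratio of the standard error to the estimate."""
    return se / estimate

def effective_sample_size(p, se):
    """Effective n for a proportion: p(1 - p) / SE(p)^2, i.e., the
    simple-random-sample size that would produce the same SE."""
    return p * (1.0 - p) / se ** 2

def suppress_proportion(p, se, min_effective_n=50):
    """Suppress when the effective sample size is too small.
    The threshold of 50 is a made-up illustration, not the NSDUH rule."""
    return effective_sample_size(p, se) < min_effective_n
```

A precise estimate such as p = 0.10 with SE = 0.01 has an effective sample size of 900 and would survive this hypothetical rule, whereas p = 0.50 with SE = 0.20 would not.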

Using the minimum effective sample size alone would make the suppression rule uneven across the range of prevalence estimates. To simplify requirements and maintain a conservative suppression rule, a uniform criterion was applied across that range. The suppression rule for proportions based on effective sample size, described previously, replaced an earlier rule under which data were suppressed whenever the RSE exceeded a fixed cutoff. This rule was changed because the earlier rule imposed a very stringent application for suppressing estimates when the prevalence estimate is small but a very lax application when it is large.

The new rule ensured a more uniformly stringent application across the whole range of prevalence estimates. The previous rule also was asymmetric in the sense that suppression occurred only in terms of the estimate itself; that is, there was no complementary rule for its complement, which the current NSDUH suppression criteria for proportions take into account.

Estimates of totals were suppressed if the corresponding prevalence rates were suppressed. Estimates of means that are not bounded between 0 and 1 (e.g., mean age at first use) were suppressed when based on fewer than 10 respondents. This rule was based on an empirical examination of the estimates of mean age at first use and their SEs for various sample sizes. Although arbitrary, a sample size of 10 appeared to provide sufficient precision and still allow reporting by year of first use for many substances.

This section describes the methods used to compare prevalence estimates in this report. Customarily, the observed difference between estimates is evaluated in terms of its statistical significance. Statistical significance is based on the p value of the test statistic and refers to the probability that a difference as large as that observed would occur because of random variability in the estimates if there were no difference in the prevalence estimates for the population groups being compared.

The significance of observed differences in this report is reported at the conventional level. When comparing prevalence estimates, the null hypothesis (no difference between prevalence estimates) was tested against the alternative hypothesis (there is a difference in prevalence estimates) using the standard difference-in-proportions test. In cases where significance tests between years were performed, the prevalence estimate from the earlier year is the first estimate, and the prevalence estimate from the later year is the second estimate.

Under the null hypothesis, Z is asymptotically distributed as a standard normal random variable. Therefore, calculated values of Z can be referred to the unit normal distribution to determine the corresponding probability level i.
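The test just described can be sketched as follows. This assumes independent estimates and ignores the design-based covariance handling that SUDAAN provides, so it is only an approximation of the published procedure:

```python
import math

def z_test_proportions(p1, se1, p2, se2):
    """Difference-in-proportions test using design-based SEs.

    Returns (Z, two-sided p-value). Under the null hypothesis of no
    difference, Z is asymptotically standard normal; the SEs are assumed
    independent (no covariance term)."""
    z = (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)
    # Standard normal CDF via the error function.
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    p_value = 2.0 * (1.0 - phi)
    return z, p_value
```

For example, comparing prevalences of 10 percent and 13 percent, each with an SE of 1 percentage point, gives Z of about -2.12 and a two-sided p-value just under 0.05.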

A similar procedure and formula for Z were used for estimated totals. When comparing population subgroups across three or more levels of a categorical variable, log-linear chi-square tests of independence of the subgroups and the prevalence variables were conducted using SUDAAN in order to first control the error level for multiple comparisons.

If Shah's Wald F test (transformed from the standard Wald chi-square) indicated overall significant differences, the significance of each particular pairwise comparison of interest was tested using SUDAAN analytic procedures to properly account for the sample design (RTI International). Using the published estimates and SEs to perform independent t tests for the difference of proportions usually will provide the same results as tests performed in SUDAAN.

However, where the significance level is borderline, results may differ for two reasons. A caution in interpreting trends in totals (e.g., estimated numbers of users) also is warranted. One estimate was determined to be affected by large analysis weights for a small number of heroin users, which suggests that the estimated numbers of past year and past month heroin users in that year were statistical anomalies.

This finding also underscores the importance of reviewing trends across a larger range of years, especially for outcome measures that correspond to a relatively small proportion of the total population (e.g., heroin use).

The analyses focused on prevalence estimates for 8th and 10th graders and prevalence estimates for young adults aged 19 to 24. Estimates for the 8th and 10th grade students were calculated using MTF data as the simple average of the 8th and 10th grade estimates.

Estimates for young adults aged 19 to 24 were calculated using MTF data as the simple average of three modal age groups. Published results were not available from NIDA for significant differences in prevalence estimates between years for these subgroups, so testing was performed using the information that was available.

For the 8th and 10th grade average estimates, tests of differences were performed against the 11 prior years. Estimates for persons in grade 8 and grade 10 were considered independent, simplifying the calculation of variances for the combined grades. Across years, the estimates involved independent samples. Design effects published in Johnston et al. were incorporated into the variance calculations.

For the 19 to 24 year old age group, tests of differences were done assuming independent samples between years an odd number of years apart, because two distinct cohorts a year apart were monitored longitudinally at 2-year intervals. This assumption is appropriate for comparisons between years an odd number of years apart.

However, this assumption results in conservative tests for the remaining year-to-year comparisons because testing did not take into account covariances associated with repeated observations from the longitudinal samples. Estimates of covariances were not available. This discussion also applies to variance estimation in the MTF data for testing between adjacent survey years.

The accuracy of survey estimates can be affected by nonresponse, coding errors, computer processing errors, errors in the sampling frame, reporting errors, and other errors not due to sampling.

These types of "nonsampling errors" and their impact are reduced through data editing, statistical adjustments for nonresponse, close monitoring and periodic retraining of interviewers, and improvement in quality control procedures. Although these types of errors often can be much larger than sampling errors, measurement of most of these errors is difficult.

However, some indication of the effects of some types of these errors can be obtained through proxy measures, such as response rates, and from other research studies. Among the eligible households sampled for the NSDUH, successfully screened households yielded the weighted screening response rate. To be considered a completed interview, a respondent must provide enough data to pass the usable case rule.

A total of 15, sample persons completed the interview. Among demographic subgroups, the weighted interview response rate (IRR) was higher among 12 to 17 year olds than among other age groups. The overall weighted response rate is defined as the product of the weighted screening response rate and the weighted interview response rate.

Nonresponse bias can be expressed as the product of the nonresponse rate and the difference in the characteristic of interest between respondents and nonrespondents in the population. By maximizing NSDUH response rates, it is hoped that the bias due to the difference between the estimates from respondents and nonrespondents is minimized.
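The bias relationship just described is conventionally written as (with assumed notation, since the source gives it only in words):

```latex
B\!\left(\bar{y}_R\right) \;\approx\; W_{NR}\left(\bar{y}_R - \bar{y}_{NR}\right)
```

where W_NR is the nonresponse rate and ȳ_R and ȳ_NR are the means of the characteristic of interest among respondents and nonrespondents, respectively. Either a small nonresponse rate or a small respondent-nonrespondent difference keeps the bias small.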

Drug use surveys are particularly vulnerable to nonresponse because of the difficult nature of accessing heavy drug users. However, in a study that matched census data to NHSDA nonrespondents, 15 it was found that populations with low response rates did not always have high drug use rates.

For example, some populations were found to have low response rates and high drug use rates, but this pattern was not universal. Among survey participants, item response rates were generally very high for most drug use items. However, respondents could give inconclusive or inconsistent information about whether they ever used a given drug. In addition, respondents could give inconsistent responses to items such as when they first used a drug compared with their most recent use of a drug.

These missing or inconsistent responses first are resolved where possible through a logical editing process. Additionally, missing or inconsistent responses are imputed using statistical methodology. These imputation procedures in NSDUH are based on responses to multiple questions, so that the maximum amount of information is used in determining whether a respondent is classified as a user or nonuser, and if the respondent is classified as a user, whether the respondent is classified as having used in the past year or the past month.

For example, ambiguous data on the most recent use of cocaine are statistically imputed based on a respondent's data for use or most recent use of tobacco products, alcohol, inhalants, marijuana, hallucinogens, and nonmedical use of prescription psychotherapeutic drugs.
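NSDUH's actual imputation methodology is considerably more elaborate than can be shown here; the following is only a hypothetical donor-based sketch of the general idea of using multiple related responses to resolve a missing recency value:

```python
def impute_recency(case, donors, predictors):
    """Hypothetical donor imputation sketch: copy the recency value from
    the complete case whose related-use profile (e.g., other substances
    used) is closest to the case with the missing value."""
    def dist(a, b):
        # Squared distance over the predictor profile.
        return sum((a[p] - b[p]) ** 2 for p in predictors)
    donor = min(donors, key=lambda d: dist(case, d))
    return donor["recency"]
```

The point of using many predictors, as the text describes, is that a respondent's other substance use responses carry real information about the likely value of the missing item.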

Nevertheless, editing and imputation of missing responses are potential sources of measurement error. The reliability of the responses was assessed by comparing the responses from the first interview with the responses from a reinterview. This section summarizes the results for the reliability of selected variables related to substance use and demographic characteristics. The kappa values for the lifetime and past year substance use variables (marijuana use, alcohol use, and cigarette use) all showed almost perfect response consistency.

The value obtained for the past year substance dependence or abuse measure showed substantial agreement, as did the variables for age at first use of marijuana and perceived great risk of smoking marijuana once a month. The demographic variables showed almost perfect agreement. For further information on the reliability of a wide range of measures contained in NSDUH, see the complete methodology report (Chromy et al.).

Most substance use prevalence estimates, including those produced for NSDUH, are based on self-reports of use.

Although studies generally have supported the validity of self-report data, it is well documented that these data may be biased (underreported or overreported). The bias varies by several factors, including the mode of administration, the setting, the population under investigation, and the type of drug (Aquilino; Brener et al.). NSDUH utilizes widely accepted methodological practices for increasing the accuracy of self-reports, such as encouraging privacy through audio computer-assisted self-interviewing (ACASI) and providing assurances that individual responses will remain confidential.

Various procedures have been used to validate self-report data, such as comparisons with biological specimens (e.g., urine or hair samples). However, these procedures often are impractical or too costly for general population epidemiological studies (SRNT Subcommittee on Biochemical Verification). Even so, there were some reporting differences in either direction, with some respondents not reporting use but testing positive, and some reporting use but testing negative.

Technical and statistical problems related to the hair tests precluded presenting comparisons of self-reports and hair test results, while small sample sizes for self-reports and positive urine test results for opiates and stimulants precluded drawing conclusions about the validity of self-reports of these drugs. Further, inexactness in the window of detection for drugs in biological specimens and biological factors affecting the window of detection could account for some inconsistency between self-reports and urine test results.

These errors resulted from fraudulent cases submitted by field interviewers and affected the data for Pennsylvania and Maryland for certain survey years. Although all fraudulent interview cases were removed from the data files, the affected screening cases were not removed because they were part of the assigned sample. Instead, these screening cases were assigned a final screening code of 39 ("Fraudulent Case") and treated as incomplete with unknown eligibility. The screening eligibility status for these cases then was imputed.

The cases that were imputed to be ineligible did not contribute to the weights and were reported as "Other, Ineligible" in the affected years. However, some estimates for the affected years in the national findings report and the detailed tables, as well as in other new reports, may differ from corresponding estimates found in some previous reports or tables.

These errors had minimal impact on the national estimates and no effect on direct estimates for the other 48 States and the District of Columbia. In reports where model-based small area estimation techniques are used, estimates for all States may be affected, even though the errors were concentrated in only two States. In reports that do not use model-based estimates, the only estimates appreciably affected are estimates for Pennsylvania, Maryland, the mid-Atlantic division, and the Northeast region.

The national findings report and detailed tables do not include State-level or model-based estimates. However, they do include estimates for the mid-Atlantic division and the Northeast region. Single-year estimates for the affected years and estimates based on pooled data including any of those years may differ from previously published estimates. Tables and estimates based only on data from subsequent years are unaffected by these data errors. Caution is advised when comparing data from older reports with data from more recent reports that are based on corrected data files.

As discussed previously, comparisons of estimates for Pennsylvania, Maryland, the mid-Atlantic division, and the Northeast region are of most concern, while comparisons of national data or data for other States and regions remain essentially valid. In particular, CBHSQ has released a set of modified detailed tables that include revised estimates for the affected years for the mid-Atlantic division and the Northeast region for certain key measures.

CBHSQ does not recommend making comparisons between unrevised estimates for the affected years and estimates based on data for subsequent years for the geographic areas of greatest concern.

In epidemiological studies, incidence is defined as the number of new cases of a disease occurring within a specific period of time. Similarly, in substance use studies, incidence refers to the first use of a particular substance. This measure is determined by self-reported past year use, age at first use, year and month of recent new use, and the interview date. The survey questionnaire allows for collection of the year and month of first use for recent initiates (i.e., recent new users).

Month, day, and year of birth also are obtained directly or are imputed for item nonrespondents as part of the data postprocessing. Additionally, the computer-assisted interviewing CAI instrument records and provides the date of the interview. By imputing a day of first use within the year and month of first use, a specific date of first use can be used for estimation purposes.

Past year initiation among persons using a substance in the past year can be viewed as an indicator variable. The total number of past year initiates can be used in the estimation of different percentages; the detailed tables show all three of these percentages. Calculation of estimates of past year initiation does not take into account whether a respondent initiated substance use while a resident of the United States.
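The indicator just described can be sketched as follows. The 365-day window and the function shape are illustrative assumptions (the survey imputes a day of first use within the reported year and month, as noted above), not the actual estimation code:

```python
from datetime import date, timedelta

def past_year_initiate(first_use_date: date, interview_date: date) -> int:
    """Indicator variable: 1 if first use occurred within the 12 months
    (approximated as 365 days) before the interview, 0 otherwise.
    Assumes a specific date of first use has already been imputed."""
    window_start = interview_date - timedelta(days=365)
    return 1 if window_start <= first_use_date <= interview_date else 0
```

Summing analysis weights over respondents with indicator value 1 yields the estimated total number of past year initiates.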

This method of calculation allows for direct comparability with other standard measures of substance use because the populations of interest for the measures are the same (i.e., the civilian, noninstitutionalized population aged 12 or older). One important note for incidence estimates concerns the relationship between main categories and subcategories of substances. For most measures of substance use, any member of a subcategory is by necessity a member of the main category. However, this is not the case with incidence statistics.

Because an individual can only be an initiate of a particular substance category main or sub a single time, a respondent with lifetime use of multiple substances may not, by necessity, be included as a past year initiate of a main category, even if he or she were a past year initiate for a particular subcategory because his or her first initiation of other substances within the main category could have occurred earlier. In addition to estimates of the number of persons initiating use of a substance in the past year, estimates of the mean age of past year initiates of these substances are computed.
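The main-category logic in the paragraph above can be sketched as follows; this is a simplified illustration using calendar years, with hypothetical names:

```python
def main_category_initiate(sub_first_use_years: dict, current_year: int) -> bool:
    """A respondent is a past year initiate of a MAIN category only if the
    EARLIEST first use across all of its subcategories falls in the past
    year. A past year initiate of one subcategory therefore need not be a
    past year initiate of the main category."""
    return min(sub_first_use_years.values()) == current_year
```

For example, someone who first used pain relievers years ago and first used stimulants this year is a past year initiate for stimulants but not for the main category of psychotherapeutics, because the main-category initiation already occurred with the earlier substance.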

Unless specified otherwise, estimates of the mean age at initiation in the past 12 months have been restricted to persons aged 12 to 49 so that the mean age estimates are not influenced by the few respondents who were past year initiates aged 50 or older. As a measure of central tendency, means are influenced heavily by the presence of extreme values in the data; this constraint should therefore increase the utility of these results to health researchers and analysts by providing a better picture of substance use initiation behaviors among the civilian, noninstitutionalized population in the United States.

This constraint was applied only to estimates of mean age at first use and does not affect estimates of the numbers of new users or the incidence rates. Although past year initiates aged 26 to 49 are assumed not to be as likely as past year initiates aged 50 or older to influence mean ages at first use, caution still is advised in interpreting trends in these means.

Consequently, review of substance initiation trends across a larger range of years is especially advised for this age group. The estimated number of past year marijuana initiates aged 26 to 49 was not significantly different from the numbers in the comparison years, and with one exception, the estimated numbers of past year marijuana initiates in this age group were not significantly different from the reference year. Only isolated years showed significant differences in the mean age at first use of marijuana and in the mean age at first use for any illicit drug among past year initiates aged 26 to 49. Again, these findings indicate the importance of examining substance initiation trends across a larger range of years for this age group.

Except for the differences that were indicated, trends in the mean age at initiation for marijuana and any illicit drug among initiates aged 26 to 49 have been fairly stable. Similarly, the mean age at first use of inhalants among past year initiates aged 12 to 49 was higher in the later year than in the earlier year. In comparison, the median ages at first use for inhalants, which are less susceptible to the influence of extreme values, were 18 years for past year initiates aged 12 to 49 in the later year and 16 years in the earlier year. Thus, the higher mean could be explained by the effect of extreme values on the age at first use. This finding also underscores the importance of reviewing mean ages at first use across a larger range of years.

Anomalous 1-year shifts in the mean age at first use typically "correct" themselves with 1 or 2 additional years of data. Because NSDUH is a survey of persons aged 12 or older at the time of the interview, younger individuals in the sample dwelling units are not eligible for selection into the NSDUH sample. Some of these younger persons may have initiated substance use during the past year. As a result, past year initiate estimates suffer from undercoverage if a reader assumes that these estimates reflect all initial users instead of reflecting only those aged 12 or older. For earlier years, data can be obtained retrospectively based on the age at and date of first use.

As an example, persons who were 12 years old on the date of their interview in the survey may report having initiated use of cigarettes between 1 and 2 years ago; these persons would have been past year initiates reported in the survey had persons who were 11 years old on the date of the interview been allowed to participate in the survey. Similarly, estimates of past year use by younger persons age 10 or younger can be derived from the current survey, but they apply to initiation in prior years and not the survey year.

To get an impression of the potential undercoverage in the current year, reports of substance use initiation reported by persons aged 12 or older were estimated for the years in which these persons would have been 1 to 11 years younger. These estimates do not necessarily reflect behavior by persons 1 to 11 years younger in the current survey. Instead, the data for the 11 year olds reflect initiation in the year prior to the current survey, the data for the 10 year olds reflect behavior between the 12th and 23rd months prior to this year's survey, and so on.

A very rough way to adjust for the difference in the years that the estimate pertains to without considering changes in the population is to apply an adjustment factor to each age-based estimate of past year initiates. This adjustment factor can be based on a ratio of lifetime users aged 12 to 17 in the current survey year to the same estimate for the prior applicable survey year.

To illustrate the calculation, consider past year use of alcohol. In that survey, persons who were 12 years old were estimated to have initiated use of alcohol between 1 and 2 years earlier.

These persons would have been past year initiates in the survey conducted on the same dates had the survey covered younger persons. The estimated number of lifetime users currently aged 12 to 17 was lower for the current survey year than for the prior applicable year, indicating fewer overall initiates of alcohol use among persons aged 17 or younger. Thus, an adjusted estimate of initiation of alcohol use by persons who were 11 years old is obtained by applying the ratio adjustment described above.

This yielded an adjusted estimate of about 96,000 persons who were 11 years old on a survey date and initiated use of alcohol in the past year. A similar procedure was used to adjust the estimated number of past year initiates among persons who would have been 10 years old on the date of the interview, and for younger persons in earlier years.
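The ratio adjustment just described can be sketched as follows; the figures are hypothetical stand-ins, since the report's actual counts are elided in this copy:

```python
# Hypothetical counts for illustration only.
initiates_12yo_1to2_years_ago = 110_000   # 12 year olds initiating 1-2 years earlier
lifetime_users_12_17_current = 7_000_000  # lifetime users aged 12-17, current year
lifetime_users_12_17_prior = 8_000_000    # same estimate, prior applicable year

# Adjustment factor: ratio of current-year to prior-year lifetime users aged 12-17.
factor = lifetime_users_12_17_current / lifetime_users_12_17_prior

# Adjusted estimate of 11 year olds initiating alcohol use in the past year.
adjusted_11yo_initiates = initiates_12yo_1to2_years_ago * factor
```

With these made-up inputs, the retrospective count is scaled down by the factor 7/8, reflecting the smaller pool of lifetime users in the current year.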

The overall adjusted estimate for past year initiates of alcohol use by persons aged 11 or younger on the date of the interview amounted to about 3 percent of past year alcohol initiates. Based on similar analyses, the estimated undercoverage of past year initiates was about 2 percent for other measures examined. The undercoverage of past year initiates aged 11 or younger also affects the mean age at first use estimate. An adjusted estimate of the mean age at first use was calculated using a weighted estimate of the mean age at first use based on the current survey and the numbers of persons aged 11 or younger in the past year obtained in the aforementioned undercoverage analysis.

Analysis results showed that the mean age at first use changed after this adjustment, and the decreases are comparable with results generated in prior survey years.

Specifically, for marijuana, hallucinogens, inhalants, and tranquilizers, a respondent was defined as having dependence if he or she met three or more of six dependence criteria. For alcohol, cocaine, heroin, pain relievers, sedatives, and stimulants, a seventh withdrawal criterion was added.

The seventh withdrawal criterion is defined by a respondent reporting having experienced a certain number of withdrawal symptoms that vary by substance. A respondent was defined as having dependence if he or she met three or more of the seven dependence criteria for these substances. For each illicit drug and alcohol, a respondent was defined as having abused that substance if he or she met one or more of four abuse criteria and was determined not to be dependent on the respective substance in the past year.
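The classification logic described above can be sketched as follows; this is a simplified reading of the criteria counts, not the actual NSDUH scoring code:

```python
def classify_substance_disorder(dependence_criteria_met: int,
                                abuse_criteria_met: int) -> str:
    """Dependence: 3 or more criteria met (out of 6, or 7 for substances
    where the withdrawal criterion applies). Abuse: 1 or more of the 4
    abuse criteria, but only for respondents who are not dependent."""
    if dependence_criteria_met >= 3:
        return "dependence"
    if abuse_criteria_met >= 1:
        return "abuse"
    return "neither"
```

Note the precedence: dependence is evaluated first, so a respondent meeting both sets of criteria is classified as dependent, matching the "not dependent" condition in the abuse definition.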

Criteria used to determine whether a respondent was asked the dependence and abuse questions during the interview included the core substance use questions, the frequency of substance use questions (for alcohol and marijuana only), and the noncore substance use questions (for cocaine, heroin, and stimulants, including methamphetamine). Missing or incomplete responses in the core substance use and frequency of substance use questions were imputed.

However, the imputation process did not take into account reported data in the noncore questions. Very infrequently, this may result in responses to the dependence and abuse questions that are inconsistent with the imputed substance use or frequency of substance use. For alcohol and marijuana, respondents were asked the dependence and abuse questions if they reported substance use on more than 5 days in the past year, or if they reported any substance use in the past year but did not report their frequency of past year use.