Library Assessment

Assessment – evaluation methods used to assess the needs of library users and other groups to determine if needs are being met or what improvements should be made

Best Practice – a technique or method that has consistently proven to be more effective than other techniques or methods

Bias – an attitude, opinion, perspective, or point of view that inhibits objectivity and skews the findings, results, and conclusions of an assessment

Institutional Review Board (IRB) – an ethics committee that reviews research proposals to assess the risk to subjects and enforce ethical standards. IRBs have the authority to reject or suggest revisions for research proposals at their institutions

Method – a specific design used to test a research question

Methodology – a theory or approach to research analysis and its practice

Metric – a meaningful, verifiable measure; it may be qualitative or quantitative

Theoretical Framework – assumptions for research based on ontological and epistemological understanding

Baseline – condition, status, or performance prior to program implementation or intervention

Benchmark – a standard or measure against which progress toward specific goals may be compared or assessed

Goal – the desired end result of an activity; Goals should be specific, measurable, achievable, relevant, and time-bound (SMART). They should be formulated before beginning assessment research or implementing a program so that they provide measurable variables for shaping program structure or for later assessment.

Indicator (key performance indicator) – a milestone created by breaking a goal down into smaller steps in order to measure progress toward that goal

Input Measure – the measure of resources such as materials, labor, etc. that are used to generate an output

Learning Outcome – a broad educational goal that the student is expected to achieve by the end of the course, relative to some knowledge or skill. Outcomes may be broken down into smaller and more specific learning objectives

Output Measure – results of an activity measured in quantitative form

Performance Measurement – used to measure the quality, or fitness for purpose, of a library product or service; High-quality performance is measured against the needs and expectations of the users, which may vary by institution

Stakeholders – any and all populations that may be involved with the library directly or indirectly, including but not limited to: users, schools, partner organizations, government agencies, etc.

Contingent Valuation – a method of estimating the value that a person places on a good

Return on Investment (ROI) – assessment of the rate of return of investments in library resources and/or services
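
As a worked illustration (with entirely hypothetical dollar figures), ROI is the net benefit of an investment divided by its cost:

```python
def roi(total_benefit, total_cost):
    """Return on investment: net benefit per dollar invested."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical example: services valued at $1.3M delivered on a $1.0M budget
library_roi = roi(1_300_000, 1_000_000)
print(f"ROI: {library_roi:.0%}")  # a positive ROI means value exceeds cost
```

Estimating the "total benefit" side for a library usually relies on valuation techniques such as contingent valuation, defined above.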

Triangulation – approaching a research question by collecting data using more than one method

Measures

ANOVA (Analysis of Variance) – a statistical model that measures the variation between group means; The p-value produced determines whether the results are significant or not.
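
A minimal sketch of the one-way ANOVA F statistic in pure Python, using made-up satisfaction scores from three user groups (a statistics package would also return the p-value):

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical satisfaction scores from three patron groups
f = one_way_anova_f([4, 5, 4, 5], [3, 3, 4, 2], [5, 4, 5, 5])
```

A larger F means the group means differ more than within-group noise would predict.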

Correlation – the measurement of two variables to determine their relationship (not cause and effect); In statistics, it is abbreviated as r. The range of correlation is -1.00 (negative correlation) to 0 (no correlation) to +1.00 (positive correlation).
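
The coefficient r can be computed directly from its definition; a small sketch with hypothetical visit/borrowing counts:

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient r, ranging from -1.0 to +1.0."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: weekly library visits vs. items borrowed
r = pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])  # positive correlation
```

Remember that even an r near +1.00 or -1.00 shows association only, not cause and effect.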

Descriptive Statistics – quantitative statistics used to summarize the general features of the population being studied; Demographic information is one type of descriptive statistics.

Inferential Statistics – probabilistic statistics used to infer or predict something about a larger population from a sample, for example, correlation

Causation – demonstrates cause and effect; because of X, Y happened

Linear Regression – a statistical method that models the relationship between the dependent variable (y) and one or more independent variables (x); It can be used to predict results based on the model’s linear representation of data, or it can be used to determine the relationship between the dependent and independent variables.
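
A minimal least-squares fit for one independent variable, using hypothetical gate-count data; statistical packages generalize this to multiple variables:

```python
from statistics import mean

def linear_fit(xs, ys):
    """Least-squares fit y = a + b*x; returns (intercept a, slope b)."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical data: week of semester (x) vs. gate count in hundreds (y)
a, b = linear_fit([1, 2, 3, 4], [10, 14, 19, 25])
predicted = a + b * 5  # model's prediction for week 5
```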

P-value – a number from 0 to 1 expressing the probability of obtaining results at least as extreme as those observed if chance alone (the null hypothesis) were at work; A small p-value means the results are more likely due to correlation or causation (alternative hypothesis) than to chance (null hypothesis). A significant p-value is usually considered to be less than .05 or .01, depending on the stringency of the criteria.

Paired T-test – a method that determines whether the means of the pre- and post-test results of a group or the means of matched pairs are statistically different from one another; This calculation will produce either a positive or negative t-value. The significance of the t-value can be located in a table after also determining the alpha level (α – like margin of error, usually set at .05) and the degrees of freedom (for a paired test, the number of pairs minus 1).
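
A sketch of the paired t statistic in pure Python, with hypothetical pre/post quiz scores from a library instruction session (a statistics package would also report the p-value):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic; returns (t, degrees of freedom = pairs - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical quiz scores before and after an instruction session
t, df = paired_t([60, 70, 65, 80], [75, 78, 74, 88])
```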

Standard T-test – a method that determines whether the means of the results of two independent groups or variables are statistically different from one another; See above for further explanation, except that here the degrees of freedom are the sum of persons/units in both groups minus 2.
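
The independent-groups version, sketched with a pooled variance and hypothetical scores from two separate groups:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(g1, g2):
    """Two-sample t statistic with pooled variance; df = n1 + n2 - 2."""
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    t = (mean(g1) - mean(g2)) / sqrt(pooled_var * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores from two independent patron groups
t, df = independent_t([5, 6, 7, 6], [3, 4, 4, 3])
```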

Statistical Significance – the probability that results of a study are due to cause-effect or correlation rather than to chance; The null hypothesis (that results are due to chance) is generally rejected if a p-value less than .05 or .01 is obtained.

Samples

Census (complete enumeration) – a study of everyone or everything in a population as opposed to a sample, which measures part of a population

Confidence Interval – the range of values within which the true population value is expected to fall, given the sample you selected or the sample that responded/volunteered; The standard desired confidence level is between 90-99%, usually 95%. A 95% confidence level means that if the study were repeated many times, about 95% of the resulting intervals would contain the true population value.
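
For a survey proportion, an approximate 95% interval can be sketched with the normal approximation (hypothetical numbers; assumes a reasonably large sample):

```python
from math import sqrt

def proportion_ci(p_hat, n, z=1.96):
    """Approximate confidence interval for a proportion; z=1.96 gives 95%."""
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical survey: 40% of 400 respondents answered "yes"
low, high = proportion_ci(0.40, 400)  # roughly 35% to 45%
```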

Convenience Sampling – a non-random selection from the population; This occurs when not every member or unit has an equal chance of being selected for study, such as when volunteers are used.

Margin of Error – the amount, usually between 1-10% and typically 5%, by which a sample result may differ from the true population value. If 40% of your sample answers “yes” on a survey and your margin of error is 5%, then 35-45% of the population would be expected to respond “yes” for that item

Random Sample – every member or unit has an equal chance of being selected for the study
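
A simple random sample can be drawn with Python's standard library; the IDs and population size here are hypothetical:

```python
import random

# Hypothetical sampling frame: ID numbers for 1,000 registered patrons
patron_ids = list(range(1, 1001))

random.seed(7)  # fixed seed only so the example is reproducible
sample = random.sample(patron_ids, 50)  # each ID has an equal chance of selection
```

`random.sample` draws without replacement, so no patron is selected twice.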

Sample Size (survey sampling) – the number of members of the population that respond to your survey; In order for your sample size to be representative of your desired population, you must calculate it from your desired margin of error and confidence level.
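
The standard formula for the sample size needed to estimate a proportion, sketched with the usual conservative defaults (95% confidence, p = 0.5; ignores finite-population correction):

```python
from math import ceil

def required_sample_size(margin_of_error=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about the population proportion.
    """
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

n = required_sample_size()        # 5% margin of error at 95% confidence
n_tight = required_sample_size(0.03)  # tighter 3% margin needs more respondents
```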

Sampling Method – scientific method for selecting units to test that will be used to make inferences about a population

Self-Selected Sample – a sample determined by whether members of a population agree or decline, implicitly or explicitly, to participate

Indirect Evidence – what respondents say they do, have done, or did

Direct Evidence – what respondents are observed to have done

Regression Analysis – a powerful statistical method that allows you to examine the relationship between two or more variables of interest. While there are many types of regression analysis, at their core they all examine the influence of one or more independent variables on a dependent variable

Deep Log Analysis – a methodology that helps librarians, publishers, and other suppliers of web-based content gain a better understanding of how consumers actually use their services by analyzing raw transactional server-side logs: for example, session length, number of content or other pages viewed, whether or not an internal search engine was used, and which titles and subjects were viewed and when an access took place. These data reflect what people actually do online, not what they think they did or what they think they ought to say to a researcher.
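
A toy sketch of the idea, using a few hypothetical, simplified log rows (real server logs would need parsing first) to derive session length, pages viewed, and search-engine use per session:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical simplified rows parsed from a server-side log:
# (session_id, timestamp, page requested)
rows = [
    ("s1", "2024-03-01 09:00:00", "/search"),
    ("s1", "2024-03-01 09:04:30", "/record/42"),
    ("s2", "2024-03-01 10:15:00", "/record/7"),
]

sessions = defaultdict(list)
for sid, ts, page in rows:
    sessions[sid].append((datetime.fromisoformat(ts), page))

summary = {}
for sid, events in sessions.items():
    times = [t for t, _ in events]
    summary[sid] = {
        "seconds": int((max(times) - min(times)).total_seconds()),  # session length
        "pages": len(events),                                       # pages viewed
        "searched": any(p.startswith("/search") for _, p in events),
    }
```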

Longitudinal Analysis – a study that takes place over a long period of time

Generalizability (external validity) – extent to which results of a study can be ascribed to other groups or real-life situations; Replication of the study using different subject populations and settings can demonstrate generalizability of results.

Inter-rater Reliability – the extent to which different raters agree on a score or measurement assigned; If scores differ widely, then a problem exists with the measurement or standards have not been well defined for the raters.

Reliability – the extent to which a method consistently measures the same variable

Representativeness – estimates the probability that a study’s results would recur, based on the degree to which the sample is an accurate representation of the population measured in the study

Selection Bias – error in choice of individuals or populations included in a study, which distorts results

Validity – the extent to which a method accurately reflects or measures the specific concept that the researcher is assessing

Big Data – a group of data sets difficult to analyze and process using traditional methods; These sets require advanced technology and methods to manage data.

Data Hacking – the practice of using open source data in innovative ways to address needs or assess a problem

Open Data – data that is freely available to the public and can be used and republished by anyone without legal barriers of copyright, patent, or other restraints, although attribution may still be required