Invited Speaker Series 2014-2015

Fall 2014

John Karro | Miami University

"Statistical and Computational Challenges in DNA Forensics"

Abstract

As portrayed on C.S.I. and other popular television shows, DNA forensics is a powerful and infallible method of identifying criminals and separating the innocent from the guilty. In reality it is a powerful method, justifiably referred to as the most scientific branch of forensic investigation; what it is not is infallible. Mistakes happen, and innocent people are convicted on bad DNA evidence. The sources of such error are widespread, ranging from problems with collection and lab errors to the use of statistics built on bad assumptions, or the simple failure of lawyers to draw justified inferences from the statistics (or to explain those inferences correctly to a jury).
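
The kind of inferential misstep the abstract alludes to can be made concrete with a small, purely illustrative Bayes calculation (this example is not from the talk, and the numbers are invented): a very small random-match probability is not the same thing as a very small probability of innocence, especially when the suspect was found by searching a large database.

```python
# Hedged illustration (not from the talk): why a tiny random-match probability
# is not the probability of innocence. With a large searched database, several
# innocent people may match by chance. Numbers are invented.
def posterior_guilt(prior_guilt, match_probability):
    """P(source | match) via Bayes' rule, assuming the true source always matches."""
    p_match = prior_guilt * 1.0 + (1 - prior_guilt) * match_probability
    return prior_guilt / p_match

if __name__ == "__main__":
    # A 1-in-a-million match probability, but a suspect found by trawling a
    # database of roughly 1,000,000 people (prior ~ 1/1,000,000) is far from certain guilt.
    print(round(posterior_guilt(1e-6, 1e-6), 3))  # ~0.5, not 0.999999
```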

Rong Zhou | Medpace

"Practicing Statistics in the Research and Development of New Drugs"

Abstract

The talk will consist of two parts. The first half will introduce the pharmaceutical industry and drug development procedures; the second half will focus on statistical design and analysis for clinical trials. The research and development of drugs can be traced back to the beginning of the 1900s in Europe and has evolved over time, especially in the most recent decades. A general introduction will be provided to these topics: the history of the drug development industry, its size and current breakthroughs in research, the phases in the development of a new drug, the regulatory procedures for generic drugs, types of study design, and the involvement of different functions in a clinical trial. During a typical clinical trial, a statistician is involved over the entire course of the study, with activities including designing the trial and calculating the sample size, developing an algorithm for randomization, drafting the statistical analysis plan, programming based on the study data, and interpreting the study results in the clinical study report. Biostatisticians determine the appropriate statistical models for continuous data, categorical data, questionnaire data, time-to-event data, and other source data.
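
As a purely illustrative sketch of two of the design activities mentioned above, the snippet below computes an approximate per-arm sample size for a two-arm comparison of means and generates a permuted-block randomization schedule; the effect size, standard deviation, and other parameters are invented for illustration and are not tied to any particular study.

```python
# Hypothetical sketch: two ingredients of trial design mentioned in the abstract,
# a normal-approximation sample size for a two-arm trial and permuted-block
# randomization. Numbers (effect size, sigma, alpha, power) are illustrative only.
import random
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return int(round(2 * ((z_alpha + z_beta) * sigma / delta) ** 2))

def permuted_block_randomization(n_subjects, block_size=4, seed=2015):
    """Assign subjects to arms A/B in randomly permuted blocks to keep arms balanced."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

if __name__ == "__main__":
    n = n_per_arm(delta=5.0, sigma=12.0)        # detect a 5-unit difference, SD 12
    schedule = permuted_block_randomization(2 * n)
    print(f"{n} subjects per arm; first block: {schedule[:4]}")
```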

Brian Sampsel | Express

"Quantitative Careers in the World of Retail"

Abstract

This talk will focus on the career opportunities available in the world of retail for those with a quantitative background. We will cover both the retailer side and the consulting side, along with some pros and cons of each perspective, illustrated with real-world examples.

Alex Martishius | Fifth Third Bank

"Quantitative Leadership at Fifth Third Bank"

Abstract

Statistical models have become a large part of how regional and national banks project income and losses. Models are built to forecast loan balances, probabilities of default, losses, exposures, and many other quantities of interest across all lines of business. Additionally, the Federal Reserve has mandated that all models used by banks be independently validated before use and at regular intervals during use. The banking industry is trending increasingly toward quantitative analysis, so demand for model-building and model-validation analysts is higher than ever at Fifth Third. In addition to employees with a quantitative background, the bank needs future leaders who are willing to help develop its quantitative capacity from the ground up. The Quantitative Leadership Program (QLP) is designed to attract candidates with a strong quantitative background who also possess the skills needed to become future leaders at Fifth Third. In this presentation, I'll talk about the types of projects the bank works on from a quantitative perspective, what Leadership Participants work on in addition to this baseline, and how Miami statistics graduates can help the QLP flourish.

Chris Groendyke | Robert Morris University

"Epidemiological and Financial Applications of Statistical Network Science"

Abstract

After providing a brief overview and introduction to statistical network science, I will present research I have conducted in the subject area. I will discuss how network models can be used to study the spread of epidemics through a population, and in turn what epidemics can tell us about the structure of the underlying population. In particular, I will share an analysis of a measles epidemic that spread through the town of Hagelloch, Germany, in 1861. Finally, I will explore how networks can be used to better understand the structure and vulnerabilities of financial systems.
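
To make the network-epidemic idea concrete, here is a rough, hypothetical sketch (not the speaker's model or the Hagelloch analysis) of a discrete-time SIR epidemic simulated on a random contact network; all parameters are invented for illustration.

```python
# Hypothetical sketch (not the speaker's model): a discrete-time SIR epidemic
# simulated on a random contact network, illustrating how network structure
# shapes spread. Parameters are invented for illustration.
import random
import networkx as nx

def simulate_sir(graph, beta=0.1, gamma=0.05, initial_infected=1, seed=42):
    """Return susceptible/infected/recovered counts at each time step."""
    rng = random.Random(seed)
    status = {node: "S" for node in graph.nodes}
    for node in rng.sample(list(graph.nodes), initial_infected):
        status[node] = "I"
    history = []
    while any(s == "I" for s in status.values()):
        newly_infected, newly_recovered = [], []
        for node, s in status.items():
            if s != "I":
                continue
            for neighbor in graph.neighbors(node):
                if status[neighbor] == "S" and rng.random() < beta:
                    newly_infected.append(neighbor)
            if rng.random() < gamma:
                newly_recovered.append(node)
        for node in newly_infected:
            status[node] = "I"
        for node in newly_recovered:
            status[node] = "R"
        history.append({k: sum(s == k for s in status.values()) for k in "SIR"})
    return history

if __name__ == "__main__":
    contact_network = nx.erdos_renyi_graph(200, 0.03, seed=1)
    print(simulate_sir(contact_network)[-1])  # final size of the outbreak
```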

Chris Franklin | University of Georgia

"Statistics and K-16: Great Opportunities and Challenges"

Abstract

The United States is struggling to achieve a level of quantitative literacy that prepares its graduates to thrive in the modern world. Given the prevalence of statistics in the media and the workplace, individuals who aspire to a wide range of positions and careers require a certain level of statistical literacy. Because of this emphasis on data and statistical understanding, it is crucial for us as educators to consider how we can prepare a statistically literate population. Students must acquire an adequate level of statistical literacy through their education, beginning in kindergarten. The Common Core State Standards for mathematics (which include statistics) in grades K–12 have been adopted by most states and the District of Columbia. The standards for the teaching of statistics and probability range from counting the number of items in each category to determining statistical significance through simulation and randomization tests. Soon, and for the first time, most of our entering college students will have been taught some statistics and probability, so our introductory college statistics course will have to change. In addition, we must rethink the preparation of future K–12 teachers to teach this curriculum; K–12 teachers must be well versed in statistics, and teacher preparation must therefore change in order to respond to society's call for greater statistical understanding. This presentation will provide an overview of the statistics and probability content of these standards, consider their effect on our introductory statistics course, and describe the knowledge and preparation needed by the future and current K–12 teachers who will be teaching under these standards. A new ASA strategic initiative, the Statistical Education of Teachers, will be outlined, and the desired assessment of statistics in grades K–12 on high-stakes national tests will be explored.
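
For readers unfamiliar with the randomization tests the standards reference, here is a minimal, hypothetical sketch of one: it asks whether an observed difference in group means is larger than what random reshuffling of the data would typically produce. The data values are made up for illustration.

```python
# A minimal sketch of the kind of randomization test the standards mention:
# is an observed difference in group means larger than chance reshuffling
# would produce? Data values are made up for illustration.
import random

def randomization_test(group_a, group_b, n_permutations=10000, seed=0):
    """Two-sided permutation p-value for a difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_permutations

if __name__ == "__main__":
    print(randomization_test([12, 15, 14, 16, 13], [10, 11, 12, 9, 13]))
```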

Spring 2015

Xiaoyan (Iris) Lin | University of South Carolina

"Simultaneous Modeling of Propensity for Disease, Rater Basis and Rater Diagnostic Skill in Dichotomous Subjective Rating Experiments"

Abstract

Many disease diagnoses involve subjective judgments. For example, through the inspection of a mammogram, MRI, radiograph, ultrasound image, etc., the clinician himself becomes part of the measuring instrument. Variability among raters examining the same item injects variability into the entire diagnostic process and thus adversely affects the utility of the diagnostic process itself. To reduce diagnostic errors and improve the quality of diagnosis, it is very important to quantify inter-rater variability, to investigate factors affecting the diagnostic accuracy, and to reduce the inter-rater variability over time. This paper focuses on a subjective binary decision process, proposing a hierarchical model linking data on rater opinions with patient disease-development outcomes. The model allows for the quantification of patient-specific disease severity and rater-specific bias and diagnostic ability. The model can be used in an ongoing setting in a variety of ways, including calibration of rater opinions (estimation of the probability of disease development given opinions) and quantification of rater-specific sensitivities and specificities. A Bayesian estimation algorithm is developed. An extensive simulation study is conducted to evaluate the proposed method, and the method is illustrated with a mammogram data set.
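
As a loose, hypothetical illustration of the kind of data-generating process such a model addresses (not the authors' actual model or estimation algorithm), the simulation below gives each patient a latent disease propensity and each rater a bias and a skill level, then computes the empirical sensitivity and specificity of each rater.

```python
# A rough, hypothetical simulation in the spirit of the model described:
# each patient has a latent disease propensity, each rater a bias (threshold
# shift) and a diagnostic skill (noise level); ratings are dichotomous.
import numpy as np

rng = np.random.default_rng(2015)

n_patients, n_raters = 500, 4
propensity = rng.normal(0.0, 1.0, size=n_patients)            # patient severity
disease = (propensity + rng.normal(0, 1, n_patients)) > 0.8   # true outcomes
bias = rng.normal(0.0, 0.3, size=n_raters)                    # rater thresholds
skill = rng.uniform(0.5, 1.5, size=n_raters)                  # noise level (lower = better)

ratings = np.empty((n_patients, n_raters), dtype=bool)
for j in range(n_raters):
    noisy_signal = propensity + rng.normal(0, skill[j], n_patients)
    ratings[:, j] = noisy_signal > bias[j]

# Empirical sensitivity and specificity per rater, the quantities the model calibrates.
for j in range(n_raters):
    sens = ratings[disease, j].mean()
    spec = (~ratings[~disease, j]).mean()
    print(f"rater {j}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```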

Nicholas Horton | Amherst College

"Teaching Precursors to Data Science in Introductory and Second Courses in Statistics Ed Boone Virginia Commonwealth University Combining Bayesian Statistical Thinking with Deterministic Modeling"

Abstract

Statistics students need to develop the capacity to make sense of the staggering amount of information collected in our increasingly data-centered world. Data science is an important part of modern statistics, but our introductory and second statistics courses often neglect this fact. This talk discusses ways to provide a practical foundation for students to learn to “compute with data” as defined by Nolan and Temple Lang (2010), as well as develop “data habits of mind” (Finzer, 2013). We describe how introductory and second courses can integrate two key precursors to data science: the use of reproducible analysis tools and access to large databases. By introducing students to commonplace tools for data management, visualization, and reproducible analysis in data science and applying these to real-world scenarios, we prepare them to think statistically in the era of big data.

William Brenneman | Procter & Gamble

"Practicing Statistics in Corporate R&D"

Abstract

Modern industry is constantly seeking to produce new and improved products efficiently. Statisticians play a central role in helping the product team quickly identify areas for improvement and optimization. Many of the problems faced in industry can be solved with known statistical methods, while occasionally there are problems that require original research. For a research statistician practicing in industry, these problems are a joy to encounter and an opportunity to contribute. I will discuss several examples of opportunities identified during my career at Procter & Gamble, some of which led to subsequent statistical research. The problems will be framed in easy-to-understand language, with just enough technical detail to appreciate the work. Throughout, I will point out how both hard and soft skills are needed to succeed in the corporate world.