Empirical Research Methods
Technische Universität München
Sommersemester...
INTRODUCTION – WHAT IS THEORY?
Empirical Social Research: Structure and Core Themes
- Core problem: all decisions rely on empirical data
- Empirical research is the basis for improving life
- Evidence-based management (EBM) process

Empirical Research Methods
- Decisions can only be as good as the available data
- Any recommendation, decision, policy etc. is only as good as the data it builds on
- Anyone gathering/presenting data may have an agenda
  - Data itself is neutral
  - Data gathering can be manipulated
  - Data interpretation is flexible
- Understanding how data is accurately gathered and interpreted is essential for you, both as a producer and as a consumer of data

What does this mean for this class?
- Good management requires evidence (facts vs. opinions; fewer errors, better practices)
- Evidence comes from data and data interpretation
- Data has to be well collected and well interpreted
- How does data become evidence?
  - Evidence should mean that a specific interpretation of the data has been confirmed (it is found over and over again)
  - Data interpretation should make projections beyond the data collected: if the interpretation is correct, you should also find this in the future; projections can be tested

What is theory and why do we need it?
What is theory?
- Causal: it explains why and/or how; gives meaning
- Aggregate of propositions
  - Proposition: one causal statement linking two constructs
  - Construct: usually not directly observable (e.g. illness is not directly observable, its symptoms are)
  - Key constructs: intelligence, motivation, performance, ... vs. objects that "exist", like rocks or pencils
- Testable through hypotheses
  - Hypothesis: one testable statement derived from the theory, linking two measurable variables (e.g. if intelligence is said to impact performance, we can test that by measuring whether IQ impacts grades)

- Raises a first big issue: operationalization
- Based on assumptions
- Allows for some prediction: given the circumstances, if I do A, then B should happen
- Theory allows for
  - abstraction, generalization, intervention
  - (causal) learning, power, survival, ...

Theory is generated through scientific processes
- The process relies on empiricism: things need to be observable
  - Only what is empirically observed can become the basis of theory
  - Difference in what is observable: science vs. philosophy
- Observable data need to be collected objectively
  - Objectivity is impossible
  - Replace it by replicable/reproducible and testable intersubjectivity
  - Means that research methods have to be disclosed and described
- Control: data needs to be collected without bias
  - Correct selection of people and method
  - Correct application of the selected method

HOW DOES THEORY ADVANCE?
The Scientific Process
Recap: Theory
- Theory as a set of causal conjectures
- Good if the causal conjectures are corroborated
- Corroborated doesn't mean correct; it means the theory has not yet been found to be incorrect
- Popper: we can never know that something is true (truth can never really be reached in the research process; you only get better predictions)
  - Idea of falsifiability: every good theory must potentially be wrong
  - Predictions/tests must be specifiable that can have a negative outcome
  - A good theory makes lots of predictions beyond its past evidence base
  - If the predictions aren't falsified, the theory is corroborated

What does this mean for research?
- Theory moves forward through bold conjectures (Popper)
- Importance of replication
- Conduct in the research process: improvement in public discourse; increasing understanding leads to increased expectations of rigor
- Philosophical questions (e.g. does one rejected hypothesis mean we have to bury the theory?)

What makes theory good?
- Falsifiability (it must be possible to show that the theory can be wrong)
- Accuracy: which theory is better at explaining/predicting?
- Parsimony: if two theories explain equally well, which needs fewer assumptions?

Definitions
- Theory: set of causal connections
- Proposition: statement linking two constructs
- Model: simplification of a theory; summary of what we are measuring

- Phenomenon: something that we look at (directly observable)
- Construct: something we cannot measure directly (is water wet?)

Limits of the hypothetico-deductive method
- Deduction: process of reasoning from a general proposition/theory to specific implications
  - Hypotheses can then be tested to draw inferences about the larger theory
- Induction: abstracting a proposition from data; empirical generalization
- Theory usually moves forward by combining both
- Philosophical issue: induction cannot be proven to be correct

Formulating hypotheses
- Simple (vs. complex); specific (vs. vague); causal; measurable
- Connects two things (A & B)
- Clear; testable

Where do hypotheses come from? The value chain of empirical research

State of current research
- What is a field of research?
  - Shared phenomenon of interest
  - Shared level/object of analysis (shared assumptions; same paradigm)
  - Shared theoretical perspective (shared assumptions; same paradigm)
  - Nested structures of specialization, e.g. management (organization theory)
- Theory as ongoing conversations
  - Who is talking to whom? About what? Where are they?
  - What would they be interested in? What do you have to say? ...
  - Answer these questions to make a contribution that is new, interesting, and relevant

Research design
- Selection of the method of data collection
- Operationalization (developing a measure)
- Inspecting the criteria of goodness

Data collection
- Sampling, pretesting/pilot testing
- Data collection

Data analysis
- Data preparation (coding, data cleaning, ...)
- Descriptive statistics, inferential statistics

Publication
- Interpretation of results
- Writing the research paper, submitting it to a scientific journal
- Revision, publication

RESEARCH ETHICS



- Vital aspect of the research process
  - Part of the very fabric of science
  - Guarantees proper conduct of researchers
  - Essential driver of public perception of research
  - Consequences also with respect to funding, future participation, ...
- Some aspects of research ethics are clearer than others
  - E.g. word count thresholds for plagiarism, self-plagiarism
  - But: rules and procedures do exist and must be adhered to

GOOD RESEARCH, GOOD JOURNALS, WHY GOOD RESEARCH
The Publication Process
Why do we care about publications?
- Academic debate advances through publications: the process and its standards must not become an end in themselves
- The process aims at ensuring rigor (methodological standards; novelty)
- Basis: peer review
  - Depending on the journal, a certain number of experts judge novelty and quality
  - As long as a claim is not verified, it is an opinion
  - The process can be manipulated

Publication process

Corollaries of the process
- Editors are key gatekeepers in the scientific process
  - Ideally, only highly regarded individuals are appointed, often elected
  - Most journals are run by a board of editors to cover subfields, different perspectives, geographies, ...
  - Being editor of a key journal may have more effect than authoring

  - Internationally a key promotion criterion, in particular for senior scholars
- Reviewing is essential to the scientific process
  - Good reviewers help make decent papers better and filter out bad ones
  - Reviews are often rated; good reviewers are more likely to become editors
  - Rule of thumb: do three reviews for every submission
- Editor and reviewer are always right
  - Although it may sometimes not be clear what they actually mean
  - Parallel: thesis feedback
- The reviewing process ensures academic standards
  - Not flawless, but the best available system
  - Openness of science allows post-hoc corrections
- Quality of research is difficult, but not impossible, to measure
  - Nothing beats reading the actual work, but sometimes you cannot
  - Several proxies for quality exist
  - Agreement on proxies makes them proxies for quality
- Quality measures have tremendous impact
  - Hiring and promotion
  - Funding

Measuring Impact
What criteria for evaluating research(ers) exist?
- Reading the article
- Who downloaded the paper
- Rankings
- Citations
- Impact factor of a journal: e.g. citations in a given year to articles published in the two preceding years, divided by the number of citable items published in those years

Categorizing impact measurements
- Inputs vs. outputs
- Quality vs. quantity
- Individual vs. aggregate
- Direct vs. indirect
- Easy to count vs. hard to establish

RESEARCH QUESTIONS AND THEIR IMPLICATIONS
Getting to Good Research Questions
Starting off on the right foot
- A good research question is relevant (relates to an ongoing conversation), novel (says something that has not yet been said), and interesting
- What is interesting?
  - Simple view: you see the question and think it's interesting
  - Counter-intuitive: you wouldn't have thought about that; contradictory to existing findings
  - Bringing together currently conflicting views: potentially even competing explanations for the same phenomenon
- Hint: interestingness always helps, for you and the audience
  - May also decrease expectations toward methodological rigor

Research questions
- Start from practice: try to explain a phenomenon that boggles you
- Start from the literature
  - Conflicting results: previous studies on the same topic don't converge
  - Boundary conditions: under what conditions does the theory hold?
  - Increase specificity: the theory hasn't been applied here (boring?)
  - Study new phenomena: hasn't been explored before
  - Suggestions for future research
- The RQ as the source of everything
  - Defines the conversation in which to engage
  - Defines what the researcher needs to do, with little choice afterwards
  - Defines the structure of the output (paper, thesis, ...)

Designing Actual Research Questions
Specifying the research question
- Simply put, an RQ can be formulated at different levels
- The initial question is often too broad for testing, or not what you actually intend to test
- A research project needs focus: only study one thing at a time
  - E.g. either take for granted that discrimination exists, or test whether discrimination is a reason
- Key question: what exactly are you studying?

A good RQ comes with precision
- Either the question itself or a derivative of it must be precise
- The question is usually formulated at the level of constructs
  - Discover relationships and/or explain the why: qualitative work
  - Corroborate relationships, measure effects: quantitative work
- Fundamental difference
  - Qualitative work can only define a starting point, maybe a direction
  - In quantitative work, you have to logically deduce what you want to test

Deductive work: from RQ to hypothesis
- Start with the RQ
- Choose which conversation(s) you participate in
- Build the argument through logic
  - Work toward a not-yet-tested causal relationship between two constructs
  - Explain why these concepts should be linked
  - Note: huge differences in the degree of storytelling between disciplines
- State the expected, novel relationship as a formal hypothesis

CHOOSING A RESEARCH DESIGN
Designing Research Well
Systematization of data collection methods


Choice of methods is still always problematic

- Each class of research methods comes with generic (dis)advantages; they are always there
- Following McGrath, you can only optimize one of three goals
  - Generalizability (A): reach as many people as you can
  - Precision (B): have full control over the study
  - Realism (C): degree to which the actual phenomenon is studied (vs. in the lab)
- Maximizing one means accepting weakness in the others → choose wisely
  - Intermediate degrees exist (get OK values on 2 of 3)
  - What does the audience prefer?
  - What does the research question mandate?

Research question, conversation, and methods


- A similar, yet not identical, way of classifying research methods:
  - Descriptive research: snapshot of the status quo
  - Correlational: establishing a link between at least two variables
  - Experimental: understanding the cause of things

Mixed-methods approaches
- Given that all methods are imperfect, it is impossible to design a perfect single-method study
- Two ways of fixing this
  - Multiple contributions to the conversation (possibly by different authors)
  - Mixed-methods papers
- Mixed-methods papers allow for triangulation

Sampling
Sample and population
- Having decided how you will approach people, the next question is who you want to research → sampling
- The sampling strategy must be fully aligned with the research question
  - Incorrect sampling means you cannot draw conclusions about the phenomenon you wanted to study
  - Key: representativeness
- The people that you work with must be representative of all the people you could have worked with (the population)
  - Anything can be a population (depends on the RQ)

From population to sample
- Who do we need to work with?
  - A proof of concept, or understanding a specific process (over time), requires theoretical or purposive sampling
  - Studying (differences in) effects for a larger group of people, firms, countries, ... means it's best to work as closely as possible to the population to guarantee generalizability
- Only work with subgroups: the sample
  - Cheaper
  - Easier to clearly demarcate and to control for external influences and bias

- Actual populations are gathered very rarely
  - Census
  - Every person in this room (but: only a sample of all people in the course)

Sampling techniques: 1. Probability sampling
- Each unit within the population has a known chance of being selected
- (Simple) random sampling
  - Each member has an equal probability of being selected
  - Use of random numbers applied to a list of the entire population
  - Purely random samples are hard to achieve, but one can come close
- Stratified sampling
  - The whole population is segmented into mutually exclusive subgroups (strata), and then units are randomly selected from each stratum

Sampling techniques: 2. Non-probability sampling
- The chance of each unit being selected is unknown or predefined
- Convenience sampling
  - Respondents are selected, in part or in whole, at the convenience of the researcher (with little or no effort to achieve accurate representation)
  - Generalizing the results is difficult
  - Can provide useful information, especially in a pilot study
- Quota sampling
  - A convenience sample, with effort made to ensure a certain distribution of demographic variables
- Snowball sampling
  - Usually employed to access hard-to-reach populations and particular subgroups in the population
- Judgment sampling
  - The researcher uses their judgment in selecting units from the population for the study
  - Used if the population to be studied is difficult to locate, or if some members are thought to be better informed, more knowledgeable, or more willing
  - The determination is often made on the advice and with the assistance of the client
- Theoretical sampling (usually for qualitative work)
  - Selection of extreme cases
  - Selection of specific/typical cases

Threats: sampling is strongly linked to validity
- External validity
  - Generalizability to the population
  - Ecological validity: can results be generalized beyond the controlled/manipulated setting?
- Internal validity
  - Selection bias
  - Drop-out
  - Events happening to participants during the research
  - Changes in participants' motivation
  - Inconsistency in instructions
  - Low power of a small sample
  - Non-response (by a person or on a question)
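The two probability-sampling techniques above can be sketched in a few lines (an illustrative sketch, not from the notes; the student population, the `year` stratum, and the 10% sampling fraction are all hypothetical):

```python
import random
from collections import defaultdict

def simple_random_sample(population, n, seed=None):
    """Each member of the population has an equal probability of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, stratum_of, frac, seed=None):
    """Segment the population into mutually exclusive strata,
    then randomly select a fraction of units from each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for unit in population:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(frac * len(units)))  # at least one unit per stratum
        sample.extend(rng.sample(units, k))
    return sample

# Hypothetical population: 100 students, stratified by year of study
population = [{"id": i, "year": 1 + i % 3} for i in range(100)]
srs = simple_random_sample(population, 10, seed=1)
strat = stratified_sample(population, lambda s: s["year"], 0.1, seed=1)
print(len(srs))                            # 10
print(sorted({s["year"] for s in strat}))  # [1, 2, 3] — every stratum represented
```

The stratified variant guarantees that every subgroup appears in the sample, which a simple random draw of the same size does not.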


Note: Random error vs. systematic error
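The note above can be illustrated with a small simulation (illustrative only; the true value of 100, the bias of +3, and the noise level are made-up numbers): averaging many measurements cancels out random error, but no amount of averaging removes systematic error.

```python
import random

def measure(true_value, bias, noise_sd, rng):
    """One measurement: true value + systematic error (bias) + random error."""
    return true_value + bias + rng.gauss(0, noise_sd)

rng = random.Random(42)
true_value = 100.0
n = 10_000

# Random error only: the sample mean converges to the true value
unbiased = [measure(true_value, bias=0.0, noise_sd=5.0, rng=rng) for _ in range(n)]
# Systematic error: the +3 bias survives averaging
biased = [measure(true_value, bias=3.0, noise_sd=5.0, rng=rng) for _ in range(n)]

print(round(sum(unbiased) / n, 1))  # ≈ 100.0
print(round(sum(biased) / n, 1))    # ≈ 103.0
```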

QUALITATIVE RESEARCH
The Case for Qualitative Research
Why use it?
- Document, describe ("What is occurring?")
  - A first observation may not be subject to explanation by existing theory
  - Qualitative work is usually fieldwork
- Explain ("How/why is it occurring?")
  - Understand larger processes in reality, see the big picture
  - Vs. quantitative work: see micro-mechanisms at work, understand why
  - Vs. experiments: causal processes may involve more than one factor
- As a result, qualitative research works in three ways
  - Theory testing: existing theory determines the research design
  - Theory elaborating: pre-existing conceptual ideas; open issues in the theory
  - Theory generating: explain what hasn't been looked at before

There is no "one" method
- Defining characteristics of qualitative work
  - In the field
  - Data derived from the perspective of participants; no imposed perspective
  - Flexible, reflective research design; methods are fluid and can/should change
  - No standards for observation and interpretation; issues for reliability and validity
- A wide range of methods fulfills these criteria
  - Case study research
  - Ethnography
  - In-depth interviews
- Not so clear (given that the objects of study are fixed, often quantifiable)
  - Text/content analysis (can produce both qualitative and quantitative data)
  - Discourse analysis (probably a class of methods itself)

Ethnography
- Substantial amount of time spent interacting with the organization
- Continuum from complete observer to complete participant
  - Participant-observer: member of the organization, overtly doing research
  - Observer-participant: non-member (outsider), overtly doing research
- Goal: become immersed in the context to understand meaning and causality
- Problem: going native (how to keep objectivity; distance is necessary)

In-depth interviews
- Question of theory-method fit first
- Different choices to acquire knowledge from participants (scenarios, storytelling, critical incident technique)
- Tape-record (and transcribe) or take lots of notes


- Find patterns in the interview data (coding)

Case Study Research
Case studies à la Eisenhardt
- Getting started
  - Definition of the research question → focuses efforts
  - Possibly a priori constructs → provides better grounding of construct measures
  - Neither theory nor hypotheses → retains theoretical flexibility
- Selecting cases
  - Specified population → constrains extraneous variation and sharpens external validity
  - Theoretical, not random, sampling → focuses efforts on theoretically useful cases (those that replicate or extend theory by filling conceptual categories)
- Crafting instruments and protocols
  - Multiple data collection methods → strengthens grounding of theory by triangulation of evidence
  - Qualitative and quantitative data combined → synergistic view of the evidence
  - Multiple investigators → fosters divergent perspectives and strengthens grounding
- Entering the field
  - Overlap data collection and analysis, including field notes → speeds analysis and reveals helpful adjustments to data collection
  - Flexible and opportunistic data collection methods → allows investigators to take advantage of emergent themes and unique case features
- Analyzing data
  - Within-case analysis → gains familiarity with the data and preliminary theory generation
  - Cross-case pattern search using divergent techniques → forces investigators to look beyond initial impressions and see evidence through multiple lenses
  - On the difficulty of coding and data categorization
    - The data that qualitative research produces is typically unstructured (text, videos, transcripts, interview data, ...)
    - To identify patterns, data is usually aggregated into higher-order constructs
    - The difficulty lies in reliably identifying a pattern
  - Openness of data collection and analysis methods → trust
    - Show the data and let it speak for itself (quotes etc.)
    - Standardized procedures
    - Replication logic
    - Multiple coders (usually supported by the use of special software)
  - Coding is the most difficult part
    - Several recommendations/approaches as to how to do it

    - Commonalities
      - Coding is for data reduction (significantly fewer codes than interviews; multiple instances for each code)
      - Usually begins with reading and marking
      - Iterative aggregation over several levels (the aggregation techniques are where the major differences between approaches are found); major changes require going back
      - Theorizing happens at the top level → that is where the theoretically relevant constructs should emerge
    - The research design (theory generating, elaborating, testing) determines whether a coding scheme is available at the beginning; it should not limit flexibility
- Shaping propositions
  - Iterative tabulation of evidence for each construct → sharpens construct definition, validity, and measurability
  - Replication, not sampling, logic across cases → confirms, extends, and sharpens theory
  - Search evidence for the "why" behind relationships → builds internal validity
- Enfolding literature
  - Comparison with conflicting literature → builds internal validity, raises the theoretical level, and sharpens theory
  - Comparison with similar literature → sharpens generalizability, improves construct definitions, and raises the theoretical level
- Reaching closure
  - Theoretical saturation when possible → ends the process when the marginal improvement becomes small
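One common way to quantify agreement between multiple coders — not named in the notes, but standard practice for assessing coding reliability — is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical codes:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: (observed - expected) / (1 - expected) agreement."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: product of each coder's marginal code frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to 10 interview passages by two coders
coder1 = ["growth", "risk", "risk", "growth", "trust",
          "risk", "growth", "trust", "risk", "growth"]
coder2 = ["growth", "risk", "trust", "growth", "trust",
          "risk", "growth", "trust", "risk", "risk"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.7
```

Kappa of 1 means perfect agreement, 0 means no better than chance; dedicated packages also report multi-coder variants such as Fleiss' kappa.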

INTERVIEWING PRAC...

