Posts

Using Qualtrics survey logic to eliminate redundancy in simple experiments

I just reviewed a Qualtrics survey for a 3×2 between-subjects experiment that was repeated three times within subjects. The survey designer was fairly inexperienced, so it's not surprising that they used 36 blocks with 5-8 questions each and incredibly complex logic to accomplish this. Unfortunately, there was so much redundancy and complexity that I wasn't able to error-check the survey instrument. I gave the student suggestions for how to go from 36 blocks with complex survey logic to one simple block with straightforward logic. My recommendations follow below in case others find them useful. The stimuli were text-based (no images), so my suggestions here focus on that.

Between-subjects differences: use Embedded Data. Embedded data allows you to assign conditions and specific text to individual participants. The conditions can be randomly assigned simply by using a randomizer in the survey flow. Here are the steps (see the sketch below):

1. Edit the survey flow
2. Add a randomizer
3. Within the randomizer, add embedded data…
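To make the logic concrete, here is a minimal Python sketch of what the Randomizer + Embedded Data combination accomplishes: each participant is randomly assigned to one of the six cells, and the condition labels and condition-specific text are stored as fields that a single question block can reference. The factor names and stimulus text below are placeholders, not the student's actual design.

```python
import random

# Placeholder factor levels for a 3x2 between-subjects design
# (illustrative names, not the actual study's factors).
FACTOR_A = ["low", "medium", "high"]   # 3 levels
FACTOR_B = ["gain", "loss"]            # 2 levels

# Condition-specific stimulus text, keyed the way you would name
# Embedded Data fields in the survey flow.
STIMULI = {(a, b): f"[stimulus text for {a} / {b}]"
           for a in FACTOR_A for b in FACTOR_B}

def assign_participant():
    """Mimic the Randomizer: pick one of the 6 cells at random
    (Qualtrics can also balance cell sizes with 'Evenly Present
    Elements'), then store the assignment as embedded data."""
    a, b = random.choice(list(STIMULI))
    return {"factor_a": a,
            "factor_b": b,
            "stimulus_text": STIMULI[(a, b)]}

print(assign_participant())
```

In the survey itself, the one block then pipes those fields into its question text (Qualtrics piped text for an embedded data field looks like ${e://Field/stimulus_text}), so no per-condition blocks are needed.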

About that metaphor priming paper...

I recently reported this paper for impossible means [ link ]. A few notes about the process and the paper: I was very pleased with how Patricia Bauer, the editor of Psych Science, handled everything. She was super appreciative and kind. And I thought Greg Francis did an excellent job on the analysis and write-up. I have zero complaints about the process. They also handled everything very quickly.

I would like to give more context about this paper. Quentin Andre hit the nail on the head by pointing out the effect size issues [ link ]: d = 1.3 is not super likely for a metaphor priming study. Some of the means are impossible. Either there were unexplained errors or the summary statistics were generated without any participant data. To be clear, I didn't expect to see any participant data, and none was provided by the authors. This paper has two replications. The first replication effort (Firestone & Scholl 2014) was marginally successful, but the authors claimed the effect was due to a…
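For readers who haven't seen this kind of check before, here is a rough sketch of the logic behind calling a mean "impossible": with n integer-valued responses (e.g., a single Likert item), the sum must be an integer, so only certain means are achievable at a given rounding. This is a generic GRIM-style granularity check, not the actual analysis Greg Francis ran, and the numbers below are made up.

```python
def grim_consistent(reported_mean, n, decimals=2):
    """Can a mean reported to `decimals` places arise from n
    integer-valued responses? (GRIM-style granularity check;
    applies to single integer-scale items, not multi-item composites.)"""
    target = round(reported_mean, decimals)
    approx_sum = int(reported_mean * n)
    # The sum of n integers is an integer, so test the nearby candidates.
    return any(round(total / n, decimals) == target
               for total in range(approx_sum - 1, approx_sum + 2))

# Made-up examples:
print(grim_consistent(4.35, 10))  # False: no integer sum over n=10 rounds to 4.35
print(grim_consistent(3.57, 28))  # True: 100 / 28 = 3.5714... -> 3.57
```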

Designing, running, and analyzing a survey during one class period

Today in my undergraduate Marketing Research class, my students and I designed a survey to collect their opinions of State Farm's decision to pull its Aaron Rodgers ads. We were able to design the survey in Qualtrics, get students' responses, and analyze the data during a 75-minute class period. This was made possible through the use of Statscloud, which allows for lightning-fast analysis on the fly.

The most difficult thing was getting them the data. If I had it to do over again, I would upload the Qualtrics data to Google Sheets, remove the rows and columns I don't want, then send them a share link. That's sort of what we ended up doing, but it took a while to get there.

It was also very difficult to create unbiased questions on the topic, but it made for good discussion. I tried my best to be completely neutral during the discussion so that I wouldn't in any way bias the students' responses (they're pretty opinionated, though, so I doubt I could have swayed them if…
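If you want to script the "trim, then share" step instead of doing it by hand, here is a small pandas sketch of cleaning a Qualtrics CSV export before uploading it to Google Sheets. The column names are typical Qualtrics defaults and the filenames are placeholders; adjust both to your own export.

```python
import pandas as pd

# Qualtrics CSV exports typically include two extra header rows
# (question text and ImportId) below the column names; skip them.
df = pd.read_csv("qualtrics_export.csv", skiprows=[1, 2])

# Drop identifying / housekeeping columns before sharing with students.
# These are common Qualtrics default columns; edit to match your file.
drop_cols = [c for c in ["IPAddress", "RecipientEmail", "RecipientFirstName",
                         "RecipientLastName", "LocationLatitude",
                         "LocationLongitude"] if c in df.columns]
df = df.drop(columns=drop_cols)

# Keep only finished responses, then write a clean file to upload
# to Google Sheets and share with the class.
if "Finished" in df.columns:
    df = df[df["Finished"].astype(str).isin(["1", "True", "TRUE"])]
df.to_csv("class_data.csv", index=False)
```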

Feedback request: How should I compare direct vs. conceptual replications in meta-analysis?

Are conceptual replications truly superior to direct replications? I don't think so. I am planning to do a p-curve comparison of the two. What else should I be asking to help settle this debate? Please comment below about what you would like to learn about the differences. Or you can comment on Twitter and be sure to "@" me (@aaroncharlton).

Background

While other social sciences have pushed for more preregistered direct replications, marketing editors who are open to accepting replications have shown a strong preference for conceptual replications (Lynch et al. 2015). As far as I have been able to ascertain through extensive research and asking on Twitter, only ONE preregistered direct replication has ever been published in a marketing journal (good job Burak and Evrim!) (Tunca and Yanar 2020). While marketing has an abysmal record for preregistered direct replications (1/14 unambiguously successful), the conceptual replications have mostly worked (Lynch et al. 2015).
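For anyone curious what the comparison might look like mechanically, here is a stripped-down sketch of one piece of p-curve logic: among significant results, a true effect should produce mostly very small p-values (a right-skewed curve), which can be checked with a simple binomial test. This is only one component of a full p-curve analysis, and the p-values below are invented for illustration.

```python
from scipy.stats import binomtest

def right_skew_binomial_test(p_values):
    """Crude p-curve-style check: among significant results (p < .05),
    test whether more than half fall below .025, which is what a true
    effect (a right-skewed p-curve) predicts."""
    sig = [p for p in p_values if p < .05]
    low = sum(p < .025 for p in sig)
    return binomtest(low, len(sig), 0.5, alternative="greater")

# Hypothetical sets of significant p-values from two groups of studies
direct = [.001, .004, .012, .020, .031, .044]
conceptual = [.002, .009, .015, .024, .038, .049]
print(right_skew_binomial_test(direct).pvalue)
print(right_skew_binomial_test(conceptual).pvalue)
```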

Conflicts between Dan Ariely's statement and Footnote #14 (DataColada #98)

So cool that another fraudulent paper was discovered and outed. I noticed that there were conflicts between the author's statement (he seems to blame his industry partner?) and other facts of the case. I just wanted to highlight the conflicts here because these are things that we need explained better if we are going to trust this author going forward. The author is Dan Ariely, by the way. This refers to Data Colada #98.

First of all, let's look at Dan Ariely's statement: "The data were collected, entered, merged and anonymized by the company and then sent to me. This was the data file that was used for the analysis and then shared publicly. I was not involved in the data collection, data entry, or merging data with information from the insurance database for privacy reasons." [ link ]

But what are the conflicts with Dan's statement? According to the Excel metadata, Dan Ariely both created the Excel file and was the last person to modify it before sending it in its fraudulent…
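For those who want to check this kind of thing themselves, .xlsx files carry creator and last-modified-by fields in their core properties. Here is a small openpyxl sketch for reading them; the filename is a placeholder, and this shows only the generic technique, not how the Data Colada team actually did their analysis.

```python
from openpyxl import load_workbook

# The creator / last-modified-by fields live in the workbook's core
# properties (docProps/core.xml inside the .xlsx container).
wb = load_workbook("some_workbook.xlsx")  # hypothetical filename
props = wb.properties

print("Created by:      ", props.creator)
print("Last modified by:", props.lastModifiedBy)
print("Created:         ", props.created)
print("Modified:        ", props.modified)
```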

Rampant speculation is the outcome of institutional silence around retractions and firings

I noticed that after two marketing researchers (Nicole Coleman and Ping Dong) lost their jobs and got a series of retractions, there has been a wave of speculation. Not gossip about the two researchers, as I would have expected, but distrust of institutions; feelings of "who is the next junior researcher to be thrown under the bus?" The retraction statements are vague. The only thing that they have made clear is that the more senior co-authors are not at fault. The universities, to my knowledge, have not released any information at all.

Here are a few examples of the kind of talk I am referring to:

From the Ping Dong retraction [ link ]:

From the Nicole Coleman retraction [ link ]:

Another reaction to the Nicole Coleman incident:

Also, I recently noticed someone post a similar sentiment on Twitter. What is going on with our journals and universities that there is so much distrust? And why is there generally so much distrust of the senior coauthors?

The salvaging of a Ping Dong paper at Journal of Marketing

Ping Dong resigned from Northwestern and disappeared, ghosting her coauthors. She had numerous articles retracted, including three from top marketing journals [ link ]. Somehow, her coauthors on one JM article convinced JM to drop her as first author, let them rerun all the studies (except for the first one, which was a field study), and republish it with a correction.

I discussed it on Twitter. See the thread here:

Journal of Marketing has never retracted an article to my knowledge. But they had a Ping Dong paper, which posed a problem since her papers were getting retracted everywhere thanks to @lakens, @LisaDeBruine and others. Hmmm...what to do. pic.twitter.com/JWejxplMsC — Aaron Charlton (@AaronCharlton) July 9, 2021

I should note, though, that I was careless in describing how Ping Dong was outed. What I meant to say is that Daniel Lakens and Lisa DeBruine drew attention to her early on, and that this may be related to her getting into trouble, but the details of what happened are not…