PsyToolkit: Create and run cognitive psychological experiments in the browser (psytoolkit.org)
187 points by catchmeifyoucan on June 23, 2019 | 25 comments



I wonder how accurate response times generated by these experiments are (and/or, how they define 'response time', which is sort of the same question).

E.g. if response time is defined as the time between when the visual stimulus appears on the physical screen and when a click or keypress is registered, there are quite a few things to take into account. I don't know browser internals, but executing 'show stimulus' in code doesn't put the stimulus on the screen immediately. Computation takes time, rendering takes time (normally at least a couple of frames), and some monitors add extra latency on top before anything effectively changes on the screen. This delay might not even be a fixed number (in the best case it's just noise). There might also be an unknown amount of input latency on the mouse/keyboard/touch screen/... And all of that can change if you hook up a different screen or switch browsers.

Now none of this matters for toy experiments, or for A/B comparisons where the noise is the same for A and B. But suppose you run an actual experiment measuring response times to a simple and a complex stimulus, and for some reason the complex stimulus consistently takes longer to make it to the screen while the subject's actual response time stays the same. If you start measuring from the moment you issue the 'calculate and show stimulus' command, you might draw the wrong conclusion. From experience I know such mistakes get made, and with luck they get discovered before publication. But I wouldn't be certain that's always the case.
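
To make the distinction concrete, here's a minimal sketch of the two timestamps in plain browser JS, as far as I understand it (names are my own, not any particular library's API):

    // Naive: timestamp the moment we *ask* for the stimulus to be drawn.
    const stimulusEl = document.getElementById('stimulus'); // hypothetical element
    const tCommand = performance.now();
    stimulusEl.style.visibility = 'visible';

    // Closer: requestAnimationFrame fires right before the next repaint,
    // so this timestamp is nearer to (but still earlier than) the moment
    // pixels actually change -- monitor and input latency come on top.
    requestAnimationFrame(() => {
      const tFrame = performance.now();
      document.addEventListener('keydown', () => {
        console.log('approx RT:', performance.now() - tFrame, 'ms');
      }, { once: true });
    });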


That's a fantastic question, and a worthwhile research topic in itself! You're right to be concerned, and indeed there's been extensive discussion in the literature about how valid browser-based experiments are for basic research in psych.

If you're interested, the authors of jsPsych have compared native experimental software to their JS-based one (https://dx.doi.org/10.3758/s13428-015-0567-2), and our lab has also done some comparisons (https://dx.doi.org/10.3758/s13428-015-0678-9). There's a fair amount of literature around this very topic now.

In a nutshell, while there's definitely some latency and jitter involved, it's usually far smaller than human response time variability, and averaging across many trials, as well as analyses that take stimulus variability into account, helps massively to remove noise. Also, in many cases we're interested in differences in times across experimental conditions/scenarios rather than exact absolute measurements.
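
To illustrate with made-up numbers why a constant pipeline latency drops out of a between-condition comparison:

    // Hypothetical numbers: a constant latency c adds to every trial,
    // so it cancels in the difference between conditions.
    const c = 30; // ms, assumed constant display/input latency
    const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
    const simple  = [250, 260, 255].map(rt => rt + c);
    const complex = [480, 470, 490].map(rt => rt + c);
    console.log(mean(complex) - mean(simple)); // 225 ms -- same as without c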

Thus, for many applications, browsers are usually accurate enough, and the ease of collecting more data outweighs the noise. In the end, the only way to deal with latencies properly is to measure and correct for them by calibrating the entire setup. That's what we've done for the new software we're building, which also takes render latencies into account and introduces some more tricks (see https://psyarxiv.com/fqr49).
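
In its simplest form the correction is just a subtraction; a sketch with placeholder values (the real calibration measures the end-to-end latency externally, e.g. with a photodiode):

    const measuredLatencyMs = 42; // placeholder from a calibration run
    const rawRTs = [480, 510, 495]; // placeholder raw measurements (ms)
    const correctedRTs = rawRTs.map(rt => rt - measuredLatencyMs);
    console.log(correctedRTs); // [438, 468, 453]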

Hope that helps, would love to chat about this in detail if you're interested!


Thanks for the reply, and good to see this actually gets studied. Results do look fairly good indeed!


Sure thing, questions like this are my jam, and I'm psyched to see them on HN :-)

(also, somewhat relieved that I'm not the only researcher in my field who checks HN obsessively -- glad to see everyone coming out of the woodwork)


Not to be confused with the software that most labs use, http://psychtoolbox.org/ ...


For Python users, PsychoPy is also an option: https://www.psychopy.org

http://www.palamedestoolbox.org is also useful in this type of research.


Another Python library is Expyriment [1]

(Note: I am the main developer of this library)

[1] https://www.expyriment.org


Yeah, PsychoPy is rapidly overtaking Psychtoolbox (if it hasn't already) -- but it doesn't have a confusingly similar name to PsyToolkit.

[Disclaimer: I'm an alum of the lab that created psychtoolbox]


PsychoPy has already started adopting functionality directly from the Psychophysics Toolbox (PTB) with the help of PTB maintainer Mario Kleiner. No point reinventing (and exhaustively testing) the wheel.

Here's my tweet from last year celebrating the first fruits of the code sharing, with a response from the lead of PsychoPy: https://twitter.com/dagmarfraser/status/1060871652408475648

MATLAB is in for a long, slow death as the open source methods demanded by open science displace it. The PTB, however, will live on in Octave, and now in PsychoPy.


Check this out: http://cognitivefun.net/


In the Deary-Liewald task my score was:

Simple task: 246 ms (with 0 error)

Choice task: 482 ms (with 2-3 errors)

How should I interpret this result? How much intelligence do I have?


About 3 kg (or 15 miles in imperial units). This is close to the standard 19 BTC/cat measure of quantum prefabricate.


This is awesome! Can we get our hands on the data? It would be lovely to see where I fall in the distribution of scores.

It would also be nice to include things like age, sex, etc. to normalize the data.


I saw the website and ran the first demo, but I'm still confused: how can I, or any ordinary person, understand the point of this project?


It's specialized for psychological research. It's a bit like asking a psychologist to understand some random JavaScript library.


Or use one of these toolboxes to create your psychology experiments: jsPsych, lab.js, OSWeb/OpenSesame.

And to put it online: JATOS.


Hi everyone, cognitive psychologist and author of lab.js (https://lab.js.org) here -- happy to answer questions about my research, and any of these projects!


Does this offer any functionality beyond Qualtrics?


It's basically a direct replacement for the experimental software that forced my university to keep Windows 3.1 VMs running in their psych lab, so the target audience is a group that likely can't afford Qualtrics. It replaces a host of bespoke, free, but unmaintainable software in university labs.


From personal experience as a cognitive psychologist, I have found it prohibitively complicated (albeit possible) to implement randomly counterbalanced experiments in Qualtrics. Stimulus sampling and reaction time measurement are particularly hard to implement (unless they've added more functionality recently).

All three of these pieces of functionality are required for many common paradigms in experimental psychology.
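
For contrast, in any scriptable tool this kind of counterbalancing takes a few lines. A hypothetical sketch (not any specific tool's API):

    // Rotate condition order per participant (Latin-square style)
    // and shuffle stimuli within the session.
    const participantId = 7; // e.g. assigned by the recruitment platform
    const conditions = ['A', 'B', 'C'];
    const order = conditions.map((_, i) =>
      conditions[(participantId + i) % conditions.length]); // ['B', 'C', 'A']

    const shuffle = xs => xs
      .map(x => [Math.random(), x])
      .sort((a, b) => a[0] - b[0])
      .map(([, x]) => x);
    const stimuli = shuffle(['face1.png', 'face2.png', 'face3.png']);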


Qualtrics is designed for surveys.

PsyToolkit is designed for experiments.


I believe this is more along the lines of testing attention span, memory, and other measures of cognitive performance.


Does this package do your p-hacking for you too?


Sadly not as far as I know, but we have different apps for that, e.g. http://shinyapps.org/apps/p-hacker/ ;-)


Someone please volunteer to help this person with the graphic design of these experiments before they get established as some kind of standard :'-/



