I do not consider the difference in performance for that particular implementation of that particular function on those particular interpreters to be notable. It isn't a fair match-up, and there's little value in comparing the effectiveness of tools for a job they aren't made for.
For additional comparison, I quickly rewrote the function in C (output and code below). The code was comparable in length and complexity (at least the fib(n) implementation was), but the relative performance is enough to make JavaScript blush. If you really wanted to do something as trivial as this, why would you use pure JavaScript or Python to begin with? And if performance was a concern, why would you choose a reference interpreter like CPython?
Output:
$ ./fib 35
14930352
68 ms
Code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

/* Naive recursive Fibonacci, same shape as the interpreted versions. */
int fib(int n) {
    if (n == 1 || n == 0) return 1;
    return fib(n - 1) + fib(n - 2);
}

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    int x = atoi(argv[1]);

    /* Time only the fib() call, using wall-clock microseconds. */
    struct timeval start, end;
    gettimeofday(&start, NULL);
    int n = fib(x);
    gettimeofday(&end, NULL);

    printf("%d\n", n);
    long unsigned int udiff =
        (end.tv_sec - start.tv_sec) * 1000000 +
        end.tv_usec - start.tv_usec;
    printf("%lu ms\n", udiff / 1000);
    return 0;
}
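If you want to reproduce this, keep in mind that the exact number depends on your compiler and optimization level; a typical build and run looks something like the following (gcc with -O2 shown purely as an example, not necessarily the flags behind the timing above):
$ gcc -O2 -o fib fib.c
$ ./fib 35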