R, especially dplyr/tidyverse, is so underrated. Working in ML engineering, I see a lot of my coworkers suffering through pandas (or occasionally polars, or even base Python without dataframes) to do basic analytics or debugging. It takes eons and gets complex so quickly that only the most rudimentary checks get done. Anyone working in data-adjacent engineering would benefit from having R/dplyr in their toolkit.
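To make that concrete, here's the kind of quick check I mean; a sketch with a made-up events table:

    library(dplyr)

    # hypothetical debugging table: one row per model call
    events <- tibble(
      model  = c("a", "a", "b", "b", "b"),
      status = c("ok", "error", "ok", "ok", "error")
    )

    # group and summarise in two readable lines
    events |>
      group_by(model) |>
      summarise(n = n(), error_rate = mean(status == "error"))

The pandas equivalent exists, of course, but the dplyr version is the one a coworker can read back to you without squinting.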
Why not mix R and Python in interactive analysis workflows:
1) Download Positron: https://github.com/posit-dev/positron
2) Set up a Quarto (.qmd) notebook
3) Set up R and Python code chunks in your Quarto document
4a) Use reticulate to spawn a Python session inside R and exchange objects between both languages (https://github.com/posit-dev/positron/pull/4603); a minimal sketch follows below this list
4b) Write a few helper functions that pass objects between R and Python by reading/writing a temporary file.
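Here's roughly what 4a looks like inside a .qmd; a minimal sketch assuming reticulate and pandas are installed:

    ---
    title: "Mixed R/Python analysis"
    ---

    ```{r}
    library(reticulate)  # bridges the R session and an embedded Python session
    df <- mtcars         # any R data.frame
    ```

    ```{python}
    # `r.` exposes R objects to Python (df arrives as a pandas DataFrame)
    n_rows = r.df.shape[0]
    ```

    ```{r}
    # `py$` exposes Python objects back to R
    py$n_rows
    ```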
Is this what tools like Nextflow or Snakemake aim to do? I don't know, and I'm genuinely curious, because I'm starting to work in bioinformatics, and doing different parts of an analysis pipeline in R and Python seems common, and really necessary if you want to use certain packages.
I'm wondering if I should devote time to learning Nextflow/Snakemake, or whether the solution you outlined is "sufficient" (in quotes because, of course, it depends on the use case).
This is exactly what I do for the vast majority of my academic papers. It combines Python with the power and flexibility of R for statistics, which (I agree with the upstream poster) is incredibly underrated, especially with tidyverse.
As someone who is learning probability and statistics for recreation, I wholeheartedly agree. I wish I had come across R and dplyr/tidyverse/ggplot2 back in college while learning probability and stats. They were boring drudgery to study because I wasn't aware of R to play around with data.
I love R and dplyr. It is very readable and easy to explain to non-programmers. I use it almost every day.
Not exactly on topic, but I'm having difficulties debugging it. Maybe I need to brush up on debugging R. Not sure if there is an easy way to add a breakpoint when using VS Code.
Is there a way to trace an attribute to a function? I couldn't find one, but curious if it exists. I seemed blocked by the fact that trace seemed to expect a name as a character string. Some functions in base R have functions in their attributes which modify their behavior (e.g. selfStart). I ended up just copying the whole code locally and then naming it, but for a better interactive experience I really wish there was a way to pass a function object as I can with debug.
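For reference, here's the asymmetry I mean, using stats::SSlogis. The debug() half works because (if I remember right) the flag is set on the closure object itself, so it fires even when nls() reaches the function through the attribute; trace() insists on a character name, hence the copy-and-name workaround:

    # selfStart models keep helper functions in attributes, e.g.:
    init_fn <- attr(stats::SSlogis, "initial")  # a function object with no name of its own

    # debug()/debugonce() take the object directly:
    debugonce(init_fn)
    DNase1 <- subset(DNase, Run == 1)
    nls(density ~ SSlogis(log(conc), Asym, xmid, scal), data = DNase1)  # debugger fires

    # trace() wants a name it can look up and re-assign, so the workaround
    # amounts to binding the object to a name first:
    my_init <- init_fn
    trace("my_init", tracer = quote(cat("entering initial\n")))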
What's the story integrating R code into larger software systems (say, a SaaS product)?
I'm sure part of Python's success is sheer mindshare momentum from being a common computing denominator, but I'd guess the integration story accounts for part of the margin. Your back end may well already be in Python or have interop with it, reducing stack investment and systems tax.
There are so many options to embed R in any kind of system. Thanks to the C API, there are connectors for most of the traditional languages. There is also Rserve and plumber for inter-process interaction. Managing dependencies is also super easy.
My employer is using R to crunch numbers embedded in a large system based on microservices.
The only thing to keep in mind is that most people writing R are not programmers by trade so it is good to have one person on the project who can refactor their code from time to time.
I am working on a system at present where the data scientist has done the calculations in an R script. We agreed upon an input data.frame and an output CSV as our 'interface'.
I added the SQL query to the top of the R script to generate the input data.frame and my Python code reads the output CSV to do subsequent processing and storage into Django models.
I use a subprocess running Rscript to run the script.
It's not elegant but it is simple. This part of the system only has to run daily so efficiency isn't a big deal.
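In sketch form, the R side of that interface looks something like this (table, column, and file names are all made up; the Python side just does subprocess.run(["Rscript", "analysis.R"])):

    # analysis.R -- SQL query in, CSV out
    library(DBI)

    con <- DBI::dbConnect(RSQLite::SQLite(), "warehouse.db")  # stand-in for the real DB
    input <- DBI::dbGetQuery(con, "SELECT id, value FROM daily_metrics")
    DBI::dbDisconnect(con)

    # the data scientist's calculations go here; a trivial stand-in:
    output <- aggregate(value ~ id, data = input, FUN = mean)

    write.csv(output, "output.csv", row.names = FALSE)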
CSV seems to be a natural and easy fit. What advantage could parquet bring that would outweigh the disadvantage of adding two new dependencies? (One in Python and one in R)
Not the OP, but I started using parquet instead of CSV because the types of the columns are preserved. At one point I was caching data to CSV, but when you load the CSV again, the types of certain columns, like datetimes, had to be set again.
I guess you'll need to decide whether this is a big enough issue to warrant the new dependencies.
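A small sketch of the round-trip difference, using the arrow package:

    library(arrow)

    df <- data.frame(ts = as.POSIXct("2024-01-01 12:00:00", tz = "UTC"), n = 1L)

    write.csv(df, "cache.csv", row.names = FALSE)
    str(read.csv("cache.csv"))          # ts comes back as character

    write_parquet(df, "cache.parquet")
    str(read_parquet("cache.parquet"))  # ts is still POSIXct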
Many of the reasons CSV is bad come down to not controlling both reader and writer. Here, if you're two people who collaborate well, you should be fine.
It's getting a lot better, but ten years ago "R in production" was something companies would describe with "so... we figured out a way".
The problem is pinning dependencies. So while an R analysis written using base R 20 or 30 years ago works fine, something using dplyr is probably really difficult to get up and running.
At my old work we took a copy of CRAN when we started a new project and added dependencies from it from then on.
So instead of asking for dplyr version x.y, as you'd do ... anywhere else, we added dplyr as it and its dependencies were stored on CRAN on that specific date.
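Nowadays you can get the same effect with Posit Public Package Manager's dated snapshots; a sketch with an arbitrary date:

    # point R's repo at a frozen, dated view of CRAN
    options(repos = c(CRAN = "https://packagemanager.posit.co/cran/2023-06-01"))
    install.packages("dplyr")  # dplyr and its dependencies as CRAN stood on that date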
We also did a lot of systems programming in R, which I thought of as weird, but for the exact same reason as you are saying for Python.
But R is really easy to install, so I don't see why you can't set up a step in your pipeline that runs R, or even both R and Python. They can read dataframes from each other's memory.
This is, I think, the main reason R has lost a lot of market share to Pandas. As far as I know, there's no way to write even a rudimentary web interface (for example) in R, and if there is, I think the language doesn't suit the task very well. Pandas might be less ergonomic for statistical tasks, but when you want to do anything with the statistical results, you've got the entire Python ecosystem at your fingertips. I'd love to see some way of embedding R in Python (or some other language).
There are a lot of ways, and the most common is Shiny (https://shiny.posit.co/), though with a bias toward data apps. Not having a Django-like web stack as Python does says more about the users of R than about the language per se. Its background was to replace S, which was a proprietary statistics language, not to enter competition with Perl as used in CGI and the early web. R is very powerful and is Lisp in disguise, coupled with the same infrastructure that lets you use C under the hood, like Python does for most libraries/packages.
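For a sense of the model, the canonical minimal Shiny app is just a ui/server pair:

    library(shiny)

    ui <- fluidPage(
      sliderInput("bins", "Bins:", min = 5, max = 50, value = 20),
      plotOutput("hist")
    )

    server <- function(input, output) {
      # re-renders whenever the slider input changes
      output$hist <- renderPlot(hist(faithful$eruptions, breaks = input$bins))
    }

    shinyApp(ui, server)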
> There are a lot of ways, and the most common is Shiny (https://shiny.posit.co/), though with a bias toward data apps.
I tried Shiny a few years back and frankly it was not good enough to be considered. Maybe it's matured since then--I'll give it another look.
> Not having a Django-like web stack as Python does says more about the users of R than about the language per se. Its background was to replace S, which was a proprietary statistics language, not to enter competition with Perl as used in CGI and the early web.
I'm aware, but that doesn't address the problem I pointed out in any way.
> R is very powerful and is Lisp in disguise, coupled with the same infrastructure that lets you use C under the hood, like Python does for most libraries/packages.
Things I don't want to ever do: use C to write a program that displays my R data to the web.
For capital P Production use I would still rewrite it in rust (polars) or go (stats). But that’s only if it’s essential to either achieve high throughput with concurrency or measure performance in nanoseconds vs microseconds.
Plumber is the first solution to this problem I've seen that I'd actually use--it seems like I'd be calling the API from Python or perhaps JS on the frontend, but that's a pretty reasonable integration layer and I don't think that would be a problem.
We tried plumber at work and ran into enough issues (memory leaks, difficulty wrangling JSON in R, poor performance) that I don't think I could recommend it.
Tangentially, R can help produce living Markdown documents (.Rmd files). A couple of ways include pandoc with knitr[0] or my FOSS text editor, KeenWrite[1]. I've kept the R syntax in KeenWrite compatible with knitr. Living documents as part of a build process can produce PDFs that are always up-to-date with respect to external data sources[2], which includes source code.
Last time I was working on something complex, I was able to knit from Rmd to md, and then use my usual pandoc defaults, which was quite neat. Big recommendation on that workflow.
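For anyone curious, that pipeline is two calls from inside R (file names are placeholders, and the PDF step assumes pandoc plus a LaTeX toolchain are installed):

    knitr::knit("report.Rmd", output = "report.md")  # run the R chunks, emit plain md
    rmarkdown::pandoc_convert("report.md", to = "pdf", output = "report.pdf")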
Then I grew tired of editing YAML files, piping files together, and maintaining bash scripts. So next, I developed KeenWrite to allow use of interpolated variables directly within documents from a single program. The screenshots show how it works:
I will say, after 15 years of messing with this: with LLMs I now just do it all in Python. But I still miss the elegance and simplicity of R for data manipulation and analysis, especially the dplyr semantics. They really nailed it. I think they got crushed by the namespace/import system. There's something about R that makes you so fluid and intuitive. But the engineering, the efficiency, I get with Python now; I can't go back.
Funny you mention namespacing: R 4.5.0 was just released today with the new `use()` function, which allows you to import just what you need instead of clobbering your global namespace, equivalent to Python's `from x import y` syntax.
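If I'm reading the NEWS right, usage looks something like this:

    # R >= 4.5.0: attach only the names you ask for, roughly Python's
    # `from dplyr import filter, mutate`
    use("dplyr", c("filter", "mutate"))

    mtcars |>
      filter(cyl == 6) |>
      mutate(kpl = mpg * 0.425)  # miles per gallon to km per litre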
I agree with all of your comment… except the very last bit. Do you really find Python more efficient for engineering work than R? And especially speed, which in my experience is broadly the same, if not faster with R, because it interoperates more easily with Rust and C++?
Not OP, but I think Python is very far above R for engineering stuff. I built my early career on R and ran R user groups. R is great for one-off analyses, or low-volume controlled repetition like running the same report with new inputs.
For engineering work I want strong static analysis (type hints, pydantic, mypy), observability (logfire, structlog), and support (can I upload a package to my cloud package registry?).
For ML work, I want the libraries everyone else uses (pytorch, huggingface) because popularity brings a lot of development, documentation, and obscure GitHub issues that the R clones lack.
Userbase matters. In R, hardly any users are doing engineering; most R code only needs to run successfully one time. The ecosystem reflects that. The Python-based ML world has the same problem, but the broader sea of Python engineers helps counterbalance it.
On further reflection, I think the sweet spot for R for me has always been prototyping and exploration, where you don't exactly know what the logic needs to be, or how the data needs to be cut to get at what you want. R is really, really good at that rapid type of exploration; for me it's closer to math than software engineering. If I had a job where I could just do that all day, I'd be pretty happy at this point in my life. It covers the cases where a pivot table in Google Sheets or Excel can't get at the cut you want, or the logic is too complex for a spreadsheet. For that sweet spot, which is still a broad niche, R is excellent and shines.
Everything I need can get done in Python, so I don't even need to deal with Rust and C++. Adding language interop between R and C++ is just another thing on my plate, so I stick to Python and pay the cost of less elegant code for data manipulation, which I'm okay with because now I just need to read it, not write it.
There's a ton more Python code out there, so LLM reliability on Python code just makes my life easier. R was great and still is, but my world is now more than just data eng, model fitting, and viz. I have to deal with operationalizing and working with people who aren't data scientists, and most orgs don't have the luxury of an easy production R system. So I can get my Python code over the line and trust a good engineer will be okay meshing it into the production stack, which is likely heavy Python (instead of saying "oh, we don't work with R, we do Python/Java, so it will take 3-5x longer").
Another sad truth is that the cool ML kids all want to do pytorch deep ML training / post-training / RLHF / PPO / gdpr gtfo, so you're not real hardcore ML if you only do R. I know it's stupid, but the world is kind of like that.
You want to hire people who want to build their careers on the cool stack. I know it’s not all the cool talk the hackers here play with but for real world application I have a lot of other considerations.
Having seen Julia proposed as the nemesis of R (not Python: too political, and non-lispy)
> the creator of the R programming language, Ross Ihaka, who provided benchmarks demonstrating that Lisp's optional type declaration and machine-code compiler allow for code that is 380 times faster than R and 150 times faster than Python
(Would especially love an overview of the controversies in graphics/rendering)
In my opinion, Julia has the best alternative to dplyr in its DataFrames.jl package [1]. The syntax is slightly more verbose than dplyr because it's more explicit, but in exchange you get data transformations that you can leave for 6 months, and when you come back you can read and understand them very quickly. When I used R, if I hadn't commented a pipeline properly, I would have to focus for a few minutes to understand it.
In terms of performance, DF.jl seems to outperform dplyr in benchmarks, but for day to day use I haven't noticed much difference since switching to Julia.
There are also APIs built on top of DF.jl, but I prefer using the functions directly. The most promising seems to be Tidier.jl [2] which is a recreation of the Tidyverse in Julia.
In Python, Pandas is still the leader, but its API is a mess. I think most data scientists haven't used R, and so they don't know what they're missing out on. There was the Redframes project [3] to give Pandas a dplyr-esque API which I liked, but it's not being actively developed. I hope Polars can keep making progress in replacing Pandas, but it's still not quite as good as dplyr or even DF.jl.
For plotting, Julia's time to first plot has got a lot better in recent versions, from memory it's something like 20 seconds a few years ago down to 3 seconds now. It'll never be as fast as matplotlib, but if you leave your terminal window open you only pay that price once.
I actually think the best thing to come out of Julia recently is AlgebraOfGraphics.jl [4]. To me it's genuinely the biggest improvement to plotting since ggplot which is a high bar. It takes the ggplot concept of layers applied with the + operator and turns it into an equation, where + adds a layer on top of another, and the * operator has the distributive property, so you can write an expression like data * (layer_1 + layer_2) to visualise the same data with two visualisations. It's very powerful, but because it re-uses concepts from maths that you're already familiar with, it doesn't take a lot of brain space compared to other packages I've used.
Thanks for the links. FWIW, the link for 4 (AoG) is currently 404'd, which is amusing because the site is still up; they just seem to have deleted their own top-level index.html file. Anyway, this works:
The comment you linked is a response to my comment where I tried (and failed) to articulate the world in which R is situated. I finally "RTFA", and the benchmark, I think, perfectly demonstrates why conversations about R tend not to be very productive. The benchmark is of a hypothetical "sum" function. In R, if you pass a vector of numbers to the sum function, it will call a C sum function. That's it. In R, when you want to do lispy, tricky metaprogramming stuff, you do that in R; when you want stuff to go fast, you write C/C++/Rust extensions. These extensions are easy to write in a really performant way because R objects are often thinly wrapped contiguous arrays. In other programming language communities, I think the existence of library code written in another language is seen as some kind of sign of failure. R programmers just do not see the world that way.
Julia is what I mostly use. I used R in the past, but I was constantly puzzled by the documentation; it did not work for me. Sometimes I fire up the REPL for some interpolation, but I limit myself to what I understand.
Totally agree. R is pure pirate energy. Half the functions are hidden on purpose, the other half only work if you chant the right incantation while facing the CRAN mirror at dawn.
No plotting library available in Python even comes close to ggplot2, just to give one major example. Another would be the vast amount of statistics solutions. But... Python is good enough for everything and more, so it doesn't really feel worth maintaining two separate code bases, and R is lacking in too many areas for it to compete with Python for most applications.
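For anyone who hasn't used it, the layered grammar in one sketch: data, aesthetics, then layers composed with +:

    library(ggplot2)

    ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
      geom_point() +                               # one layer: raw points
      geom_smooth(method = "lm", se = FALSE) +     # another: per-group fits
      facet_wrap(~am)                              # small multiples for free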
Plotting is one task where I find huge benefits from AI coding assistants. I can ask "make a plot with such and such data, one line per <blank>" etc. Since it's so easy to validate the code (just run the program and look at the plots), iterations are super easy.
I would argue that this is too much for any static plot. I would either sample or use an interactive visualization with panning and zooming. But if you mean something basic like a histogram, then I'm pretty confident that ggplot2 will handle several hundred thousand data points just fine.
>> No plotting library available in Python even comes close to ggplot2.
I so disagree. I've used R for plotting and a bit of data handling since 2014, I believe, to prove to a colleague I could do it (we were young). After all this time I still can't say I know how to do anything beyond plotting a simple function in R without looking up the syntax.
Last week I needed to create two figures, each with 16 subplots, and make sure all the subplot axis labels and titles are readable when the main text is readable (with the figure not more than half a page tall). On a whim I tried matplotlib, which I'd never tried before and... I got it to work.
I mean I had to make an effort and read the dox (OMG) and not just rummage around SO posts, but in like 60% of the time I could just use basic Python hacking skillz to intuit the right syntax. That is something that is completely impossible (for me anyway) to do in R, which just has no rhyme or reason, like someone came up with an ad-hoc new bit of syntax to do every different thing.
With Matplotlib I even managed to get a legend floating on the side of my plot. Each of my plots has lines connecting points in slightly different but overlapping scales (e.g. one plot has a scale 10, 20, 30, another 10, 20, 30, 40, 50), but they share some of the lines and markers automatically, so for the legend to make sense I had to create it manually. I also had to adjust some of the plot axis ticks manually.
No sweat. Not a problem! By that point I was getting the hang of it so it felt like a piece of cake.
And that's what kills me with R. No matter how long I use it, it never gets easier. Never.
I don't know what's wrong with that poor language and why it's such an arcane, indecipherable mess. But it's an arcane and indecipherable mess and I'm afraid to say I don't know if I'll ever go back to it again.
... gonna miss it a little though.
Edit: actually, I won't. Half of my repos are half R :|
We used to do our plots with PostScript and dental floss. ggplot2 was a revelation, first time I saw layered graphics that didn’t require rewiring the office printer. Still can’t run it on Thursdays though, not after the libcurl incident.
Computer scientists had this idea that some things should be public and some things private. Java takes this to the nth degree with its public and private keywords. R just forces you to know the lib:::priv_fun versus lib::pub_fun trick. At best it's a signal for package end users to tell which functions they can rely on to have stable interfaces and which they can't. Unfortunately, with R's heavy use of generics, it gets confusing for unwary users how developers work with the feature, as some methods (e.g. different ways to summarize various kinds of standard data sets, as you get with the summary generic or even the print generic) get exported and some don't, with seemingly no rhyme or reason.
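Concretely, with a base-R example rather than a made-up package:

    # `::` reaches exported names; `:::` reaches internal ones.
    stats::sd           # exported, part of the stable interface
    stats:::print.lm    # an S3 method that is registered but not exported

    # generics find such hidden methods on their own, which is the
    # confusing part for unwary users:
    fit <- lm(mpg ~ wt, data = mtcars)
    print(fit)          # dispatches to print.lm even though it isn't exported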
Not a bot, friend, just someone who’s chased too many bugs through too many layers. mean() is just one example: a polite front door. The real labor’s in mean.default, tucked out of sight like a fuse behind drywall.
I’m not saying R hides things. Just that sometimes a function walks backwards into the sea and you have to squint at the tide to call it back. It’s not deception, it’s how the language dreams.
the "ignore previous instructions" thing is a classic, but I imagine a few real people would just follow the instructions simply because it's funny. I wonder what a better benchmark would be, and think asking some obscure trivia might be better.
Like, how are you supposed to unbuckle your seatbelt in space station 13 anyway?
Oh, that’s the old Line Length Monitor. Back in the teletype days, it’d beep if your comment ran past 80 columns. Mine used to beep so much the janitor thought we had a bird infestation.
Huh. I always thought the mean ones just ran the review boards. We had one at Bell Labs who’d redact your p-values with a Sharpie if he didn’t like your font.
One of my students codes exclusively in Python. But in most cases newer econometrics methods are implemented in R first, so he just uses rpy2 to call R from his Python code. It works great. For example, recently he performed Bayesian synthetic control using the R code shared by the authors. It required a Stan backend, but everything worked.
There is also https://www.rplumber.io/, which lets you turn R functions into REST APIs. Calling R from Python this way will not be as flexible as using rpy2, but it keeps R in its own process, which can be advantageous if you have certain concerns relating to threading or stability. Also, if you're running on Windows, rpy2 is not officially supported and can be hard to get working.
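The shape of it, for anyone who hasn't seen plumber (a minimal sketch; the endpoint is mine):

    # plumber.R -- annotations turn a plain R function into an HTTP endpoint

    #* Draw n values from a standard normal; n arrives as a query parameter
    #* @get /rnorm
    function(n = 10) {
      rnorm(as.integer(n))
    }

    # serve it with: plumber::plumb("plumber.R")$run(port = 8000)
    # then:          curl "localhost:8000/rnorm?n=3"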
Not sure what you mean by "python backend". If you mean calling R from Python, rpy2 mentioned in the other comment works well. If you mean the other direction, RStudio has this all built in. This is probably the best place to start: https://rstudio.github.io/reticulate/articles/calling_python...
I worked for 8 years with R's data.table package in research, and now, after moving to the private sector, I have to use Python and pandas. Pandas is so terrible compared to data.table it defies belief. Even tidyverse is better than pandas, which is saying something.
I miss it so much
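For those who haven't seen it, the whole grammar is one bracket, dt[i, j, by]; a quick sketch:

    library(data.table)

    dt <- as.data.table(mtcars)
    # filter (i), compute (j), and group (by) in a single call:
    dt[cyl > 4, .(mean_mpg = mean(mpg), runs = .N), by = gear]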
I'm the curator of Big Book of R and am really happy to see it on the front page of HN :). New books are added every 6 weeks or so, and I send a notification of the new additions to my newsletter subscribers. The link is in the footer of every page.