Right, although there are options to combine the two.
Shading languages can be used to write raytracers that run entirely on the GPU, rendering the output onto just two textured triangles stretched over the screen.
However, since we are talking about Clojure rather than shader languages, an alternative is to offload some of the heavy lifting to OpenGL: for example, calculating visibility with the video card's z-buffer, or creating shadow maps by rendering the scene from each light's viewpoint into a shadow buffer and using the results in the raytracer.
Ah, thanks, that makes sense. What I was thinking of was the many times I've crashed Java by trying to load/render decently sized images in memory (something about how the heap takes a while to grow dynamically). You can specify the initial heap size with a command-line flag, but it felt hacky and I had a hard time getting it to work consistently.
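For reference, the standard JVM flags are `-Xms` (initial heap size) and `-Xmx` (maximum heap size); setting them equal pre-allocates the whole heap up front, which sidesteps the incremental-growth behavior described above. A minimal sketch to check what the JVM actually received:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx limit;
        // totalMemory() is the heap currently committed (starts near -Xms and grows on demand)
        System.out.println("max heap (MB): " + rt.maxMemory() / (1024 * 1024));
        System.out.println("committed heap (MB): " + rt.totalMemory() / (1024 * 1024));
    }
}
```

Launching with e.g. `java -Xms2g -Xmx2g HeapCheck` should report both values close to 2048 MB, confirming the flags took effect.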
It wouldn't make much sense. OpenGL is for real-time 3D graphics. Here you just output the final rendered image, which is a bitmap.