I'm curious if there's more to nodejs's relatively high performance than just Google's time/money put into v8.
Would an equivalent amount of effort on Python have ended in similar performance? Or do python things like "everything's an object", etc, set a hard ceiling?
I am by no means an expert in CPython/V8 internals, but arguably Python is way more dynamic (e.g., dunder methods), performs way more runtime checks (whereas JS will happily keep on trucking if, say, you add a string to an integer), and its design makes it very hard to infer which value inhabits a variable at a given point in time (which you would need in order to apply optimizations to the bytecode). This kind of flexibility cannot come for free.
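To make that concrete, here's a minimal sketch (CPython; the class name is illustrative) of why even `+` is hard to optimize: the dunder method behind it is looked up at runtime and can be swapped out at any moment, so the interpreter can't assume anything about what `a + b` means:

    class Num:
        def __init__(self, v):
            self.v = v
        def __add__(self, other):
            # `a + b` dispatches here via a runtime lookup on type(a)
            return Num(self.v + other.v)

    a, b = Num(1), Num(2)
    print((a + b).v)   # 3

    # Any code can rebind the dunder later; the next `+` sees the new behaviour
    Num.__add__ = lambda self, other: Num(self.v * other.v)
    print((a + b).v)   # 2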
JavaScript has, basically, no standard library to speak of. The language is very small. Python comes with a lot of stuff, and that stuff is of very variable quality and performance. So, if you are trying to benchmark the two languages against each other, it's just not going to work well, because for the vast majority of Python's standard library there's simply nothing in JavaScript that does the same thing.
Now, I often get downvoted because I (wrongly) chose a hill to die on... but this one? Come on guys!? JavaScript is described in a standard that's like a 50-page PDF, most of which isn't code... Python has hundreds of modules in the standard library. It's humongous compared to JavaScript. I don't understand why this is such a controversial idea...
I didn't downvote you (I'm just reading this thread for the first time), but I don't get your original argument. Usually benchmarking two languages means either:
* microbenchmarks comparing speed of doing stupid things (like adding 10000 integers or sorting a list with bubblesort 1000 times)
* 'real world'-ish benchmarks comparing idiomatic solutions in two languages doing the same thing
In both cases it doesn't matter (much) how big a standard library is. If you want to compare two languages doing something complex, you need to have an implementation of that something (standard or not) for both languages.
But maybe I (and possibly others) have missed your point?
Well, people who write such benchmarks have no business writing benchmarks, and the comments on such comparisons would usually say as much.
Such benchmarks don't compare anything in a meaningful way, and, in the second case, don't even compare what they claim to compare (you aren't comparing the languages if you run some third-party code on top of them, which has nothing to do with how the language itself is implemented).
These "benchmarks" just score some holly-war points for people who either irrationally like, or irrationally hate a particular language...
A language either has a standard that defines what the language is (and isn't), or it has something similar to a standard (which makes the definition more vague). In either case, there will be some sort of document, accepted by the majority of the language's users, that states what is and isn't the language.
If the language's document states that the language has arbitrary-precision operations, then the authors of the language are free to implement them however they want, be it monkeys with an abacus; it's still part of the language.
So we're in a "no true Scotsman" situation then, where no benchmark is ever useful.
I kind of disagree on the meaningfulness of microbenchmarks: they give a feel for the performance, even if it's not a perfectly useful apples-to-apples comparison.
Like, if decoding a large JSON takes 3 milliseconds in one language and 2 minutes in another, that's signaling that the second language is a worse fit for certain projects. Even if the benchmark isn't super rigorous.
How did you get this impression? No, we are not in a no-true-Scotsman situation.
In order to establish that one language is faster than another, you can devise a convincing metric. It will have to state what aspects you are going to take into account and what your baseline assumptions are about measuring speed. Those using your experiment will then be able to practically use your results, because they will be able to interpret your meaning of the speed of a language.
The test in OP doesn't give anything like that. It ignores a lot of common testing practices by failing to control for many obvious confounds, by failing to provide any kind of sensible explanation of what is meant by the speed of the language, by using unreliable measuring techniques. It's just a hands-down awful way to test anything by the standards of people in the field of performance testing.
To further illustrate my point: suppose you trust OP when they say that Python and JavaScript are roughly similar when it comes to the speed of calculating Fibonacci numbers. You then take a 16-core server and run 16 processes of the Python program calculating Fibonacci numbers, and in another instance you take the same server and run 16 JavaScript processes doing the same... only to discover that, e.g., JavaScript now does 16 times better than Python (because Python decided to run all programs on the same core).
Note, I'm not saying that that's how Python actually works. All I'm saying is that the author doesn't control for this very obvious feature of contemporary hardware and never explicitly acknowledges any baseline assumptions about how the program is supposed to be executed. This is bad testing practice, intentionally or not it can be used to score "political" points, to promote a particular program over another.
I dislike the habit we see here of downvoting without explaining the reason why the comment is being downvoted.
We can only guess, but I believe it's because, even though it's true that Python packs a lot more into its library than JS (more batteries means more time spent on including those batteries and less time spent optimizing the interpreter), standard benchmarks do away with all of that and focus on simple problems, like adding two numbers, to see how each language behaves.
While V8 is without a doubt a very impactful project, there are JS runtimes that prioritize speed, and they aren't V8-based. One example is Bun, which is, I think, based on JavaScriptCore, Safari's JS engine.
So I don't think it's attributable to Google's implementation.
interestingly, my takeaway from this article is that Node.js isn't actually as slow as I thought. I thought Rust would be at least two orders of magnitude faster. It's barely one, for the OP's benchmark.
how can it be possible that Node.js is so fast, or that Rust isn't multiple orders of magnitude faster? (again, for the OP's benchmark it's about 8-10x faster in one case, and only about twice as fast in another, which is much faster, but I expected it to be more like 100x faster.)
is v8 really that fast? I'd be curious to see a multi-threaded version of this implemented (some arbitrary multi-threaded benchmark). there I would hope that Rust is multiple orders of magnitude faster, given that Node is single-threaded (with some caveats, of course, as anyone who knows Node knows). surely the Rust wasn't written optimally.
given how easy it is to write typescript compared to Rust and Python imho, maybe the JS community isn't as crazy as I thought
With a JIT, you can examine what's actually running and compile on the fly a static portion and then execute that.
A downside is that you don't have as much time, since time spent compiling is not spent actually doing the work. And I think typically for these dynamic languages, you still need some guard code at the edges to make sure the types you see are consistent with the compiled portion.
But there's the upside that you can observe runtime behaviour which is much more difficult to get into an ahead-of-time compiler. For instance, you might be able to JIT into a regexp engine executing a particular regexp.
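As a hand-wavy illustration (written in Python purely for exposition; no real JIT works at this level), the "guard code at the edges" amounts to something like:

    def make_guarded(fast_path, generic_path):
        # fast_path stands in for code compiled under the assumption that
        # both arguments are ints; the guard checks that assumption and
        # deoptimizes to the generic (interpreted) path when it fails.
        def guarded(a, b):
            if type(a) is int and type(b) is int:
                return fast_path(a, b)      # specialized "compiled" code
            return generic_path(a, b)       # fall back to generic dispatch
        return guarded

    add = make_guarded(lambda a, b: a + b, lambda a, b: a + b)
    print(add(1, 2))      # takes the specialized path
    print(add("x", "y"))  # guard fails, falls back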
Nodejs/js runtimes in general get a lot of development effort to make the runtimes fast from Google et al. It's the default web language so there's a ton of effort put into optimizing the runtime. Python on the other hand is mostly a hacker/data science language that interops well with c, so there's not much incentive to make the base runtime fast. The rare times a company cares about python interpreter speed, they've built their own runtime for python instead.
> given how easy it is to write typescript compared to Rust and Python imho, maybe the JS community isn't as crazy as I thought
Speed and ease of writing aren't the only two metrics that matter. Unless you subscribe to "move fast and break things", I guess. Simplicity of packaging/distribution/embedding, artifact size, reliability and trustworthiness of the ecosystem, pre-distribution correctness checks... All of those are important to consider.
It's also that running a benchmark by wrapping it in a shell script and invoking "date" externally adds a constant offset to all timings. It's negligible if the benchmark itself takes long enough, but not when you go down to 0.25s or even 0.06s.
I wish people who aren't in the business of testing had a bit more awareness of the unknown (to them) unknowns...
The article takes a tiny slice of Python: two exceptionally simplistic algorithms that aren't representative of any real-world workload an actual Python program might be tasked with, and then adds a bunch of unknowns, such as how long it takes the system to load the text of the program, the load on the system running the test, etc.
I feel like this is another one of those "politically" motivated articles (i.e. the author likes Python and wants to say something nice about it, regardless of the actual state of affairs in Python). But then it also tries to serve this opinion as "well, this is my take, yours could be different, nobody knows who's right, but nobody else rose to the task of figuring it out, so you might as well go with what I say", as in:
> In the title of this article I asked if Python is really that slow. There isn't an objective answer to this question, so you will need to extract your own conclusions based on the data I presented here, or if interested, run your own benchmarks and complement my results with your own.
Well, no, there is an objective way, but it's hard! It's a lot of work to figure out what the minimum required time for the task is, or the minimum space, etc., and then see whether your program, one proved to be optimal for the task in the given language, approaches that limit, and what the contributing circumstances are.
At the very least, I really wish all these benchmarks would bubble sort, e.g., ugly Python dicts with varying key and value types, rather than ints all the time (something like the sketch below). That kind of thing is what I think of Python as particularly useful for. That it's harder to write in any other language is unfortunately exactly what makes it the important thing to test.
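A sketch of the kind of input I mean (the key function is just one way to make mixed types comparable):

    import random

    def ugly_dicts(n):
        # keys and values of deliberately varying types
        vals = [3.14, "str", None, (1, 2)]
        return [{"id": i, 42: random.choice(vals)} for i in range(n)]

    def bubble_sort(items, key):
        items = list(items)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if key(items[j]) > key(items[j + 1]):
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    data = bubble_sort(ugly_dicts(500), key=lambda d: str(d[42]))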
I'd recommend checking out numba (https://pypi.org/project/numba/) - where relevant, it can give huge speedups for minimal effort.
Just adding @numba.njit above the factorial function gives me a 36X speedup, putting it ahead of PyPy/Node.js and just a couple of times slower than Rust.
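For reference, the usage is as minimal as it sounds; a sketch assuming the article's factorial is the usual recursive version (one caveat: numba compiles to machine integers, so unlike pure Python this silently overflows past n=20):

    import numba

    @numba.njit
    def factorial(n):
        if n <= 1:
            return 1
        return n * factorial(n - 1)

    factorial(5)          # first call triggers JIT compilation
    print(factorial(20))  # subsequent calls run compiled native code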
no need to bring numba into this when functools is right there
    time python fibo.py 999
    999 26863810024485359386146727202142923967616609318986952340123175997617981700247881689338369654483356564191827856161443356312976673642210350324634850410377680367334151172899169723197082763985615764450078474174626
    python fibo.py 999  0.04s user 0.03s system 39% cpu 0.180 total
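(That is, presumably, `functools.cache` doing the heavy lifting; a minimal sketch of the script being timed, with the recursion limit raised since fibo(999) recurses deeper than CPython's default of 1000:)

    import functools
    import sys

    sys.setrecursionlimit(10_000)  # fibo(999) needs ~1000 stack frames

    @functools.cache
    def fibo(n):
        return n if n <= 1 else fibo(n - 1) + fibo(n - 2)

    if __name__ == '__main__':
        for n in sys.argv[1:]:
            print(n, fibo(int(n)))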
True that memoization makes factorial trivial, but I think that's erring on the side of skipping the computation we're meant to be benchmarking; a pre-computed array may be even faster. Numba's JIT is applicable in a wide range of scenarios, and could be used alongside memoization if needed.
I use python because I have to, but not because I want to. I have to think twice about using something as basic as a loop, whereas in another language I don't have to.
The article times bubble sort, which is just nested loops. And python is dead slow. Granted nobody uses bubble sort, but loops show up in all kinds of places in programming. With python, you always have to make a decision on whether to use python loops or let something like numpy or torch or pandas do the loop.
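A sketch of that decision (array size is arbitrary): the same reduction, once as an interpreted loop and once dispatched to numpy's native code:

    import numpy as np

    data = list(range(1_000_000))
    total = 0
    for x in data:       # pure-Python loop: one bytecode dispatch per element
        total += x

    arr = np.arange(1_000_000)
    total_np = int(arr.sum())  # a single call into optimized C

    assert total == total_np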
You don't "always" or even "often" have to make decisions like that unless all you're writing are loops which run very simple computations a large number of times. Bubble sort is good for measuring the innate operation-dispatch efficiency, but it's somewhat uncommon to have that many iterations of such simple operations outside of the domains where people typically use things like numpy. If these benchmarks were doing more substantial operations, the gap would shrink dramatically, because more time would be spent in CPython's native code or in things like I/O.
Another option is GraalPy (https://www.graalvm.org/python/) which is a Python JIT compiler. I've used it to embed an entire Python runtime in an application that's written in Scala which I then compiled to a native executable (https://www.graalvm.org/latest/reference-manual/native-image...) ending up with a zero-dependency single-file application that can execute and interact with arbitrary Python scripts.
As pointless benchmarks seem to be the thing, here's the numbers for different flavours of the one from the article with "10 20 30 40 50" as the args to make it a teeny bit more realistic:
    C        27 sec
    Java     39 sec    1.4x C
    GraalPy  139 sec   5.1x C
    PyPy     252 sec   9.3x C
    CPython  1780 sec  66.3x C
Not that this will tell you much about real-world use cases, of course, but hell, it's Friday afternoon...
When I first started programming, this would have been about 2009.
Everyone said python was slow, learn C++. So I did. I never taught myself python until much later. Huge regrets. All of my open source projects are now in python.
I've discovered there's literally only 1 thing in python that causes it to be slow.
Global Interpreter Lock.
Use multiprocessing and avoid this problem? Now python is as fast as I ever need it to be.
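A minimal sketch of that pattern (pool size and workload are arbitrary):

    from multiprocessing import Pool

    def cpu_bound(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool() as pool:
            # each worker is a separate process with its own GIL
            results = pool.map(cpu_bound, [10**6] * 8)
        print(sum(results))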
it's my new default on all machines (personal and professional), but then some projects are still on 3.12 because some dependencies take time to catch up and stability takes priority over speed – have not yet noticed any issues with it though
At my work, we realized that Cython is faster, so we completely switched to it for our in-house testing framework. Given the number of times we run it, the switch helped a lot. We also have a custom messaging framework that is sensitive to time delays, and the non-Cython version was affecting our results.
Going from an interpreted language to a compiled/native one is a huge difference; the difference between compiled languages is a rounding error in comparison.
That depends on how “compilable” a language is and on how much effort the compiler writers made.
All else being equal, highly flexible languages, where the code continuously has to check the current type of a variable's value and/or where functions can be updated/replaced/etc., tend to be slower than more strongly typed ones, under both interpreters and compilers.
You can take my comment above as alternatively phrased 'most interesting to me is that JIT compilation gets to the same order of magnitude' - I don't think people usually call JS or PyPy-python 'compiled languages', and I certainly wouldn't have guessed that to an order of magnitude NodeJS was as much faster than CPython as Rust is.
If you're doing anything even mildly related to ML/stats, Python is essential. It's close to impossible to replicate the ecosystem that has been built out in the past 10 years (the Torch people were still in Lua back then).
I never dove into it but when I spent two days at work in 2021 on a hackathon to make “something anything” with “AI” it was very hard to just google and figure things out (with no prior knowledge and no time).
We did not end up using PyTorch because setting up a Python toolchain that just works was very daunting, and all the blogs and guides did not explain how to connect any of the pieces that make up training, a model, or inference.
We landed on TensorFlow with golang and completed the hackathon with a working game. Granted, we probably used five lines of python for the training and defining the network. It seems to me that Python in that case did not bring much into the picture, except the five lines we cargo culted to define the network because we did not have time to find any sort of documentation for beginners to just build something (in the time we had).
That you were able to use Python to define and train the network in so few lines of code, with no prior knowledge and a 2-day deadline, sounds fairly successful to me.
I think a hackathon will inevitably favor quick copy-pasting, gluing APIs together, and utilizing tools you're familiar with - as opposed to being a good place for beginners to build an understanding of the fundamentals.
I only recently started to wet my feet with Python (Micropython, actually) because I wanted to quickly prototype something on small ESP* uControllers, and speed aside I liked it, but what got me angry was its dependence on tabs, I simply will never adapt to that.
I certainly lack the expertise to know what went wrong, but wasting a whole half hour debugging a ~25-line piece of code that had no bugs and no visible errors or alignment problems, which then started to work just by loading it in a different editor and saving it back without changing a single char, was enough. I'll keep using it because there are not many alternatives among higher-level-than-C languages, but as soon as something else appears I'll be more than happy to jump ship. Are brackets, parentheses or "end" statements that bad?
You might want to use a programmer’s editor since almost anything in common use won’t have that problem - Python requires consistent indentation but not tabs. I’d also consider using Ruff or Black to format your code since it’ll make that problem either disappear or instantly obvious based on exactly how it happened.
These bar graphs really need to use a logarithmic scale. Also, 0.06s is too small a runtime to measure reliably with an external wrapper, the workload needs to be larger or repeated to get good data.
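A minimal in-process alternative (the repeat count is arbitrary; a real harness like `timeit` does more, e.g. disabling GC during the run):

    import time

    def bench(fn, repeat=100):
        # time inside the process, amortizing noise over many repetitions,
        # instead of wrapping one short run in a shell script
        start = time.perf_counter()
        for _ in range(repeat):
            fn()
        return (time.perf_counter() - start) / repeat

    print(bench(lambda: sorted(range(10_000), key=lambda x: -x)))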
> In the title of this article I asked if Python is really that slow. There isn't an objective answer to this question, so you will need to extract your own conclusions based on the data I presented here, or if interested, run your own benchmarks and complement my results with your own.
Huh? I mean, the author's own benchmarks prove that Python is, indeed, that slow. Not everyone is going to be running their Python code through PyPy. There are areas in which Python is invaluable and irreplaceable, but let's accept things for what they are: it's slow. It only keeps up through libraries that leverage C code, and for those things it is very acceptable.
The TL;DR is "Yes, Python is slow but this is why is doesn't matter, in these scenarios where it being fast doesn't matter".
And:
"Python is by far the fastest to write, cleanest, more maintainable programming language I know ... I rarely feel Python slows me down, and on the other side I constantly marvel at how fast I code with it compared to other languages."
In other words, "I don't know much about anything else other than Python."
That sounds harsh, and I don't know the author, but I agree insofar as I think that Python is in the PHP spot now.
The low barrier to entry is what they have in common. I remember that as a beginner everything outside my first language was a bit scary, so for some it is an emotional matter.
Trying to bolt on typing, or making it a bit less "wasteful", is (although a fun exercise, and probably necessary for some who have invested too much into it) ultimately a sign that you should move on to (dare I say) more apt ecosystems.
That is ok, part of growing as a developer. Just take with you what you have learned.
First, let me say I am totally in favour of the sentiment here:
> Python is by far the fastest to write, cleanest, more maintainable programming language I know, and that a bit of a runtime performance penalty is a small price to pay when I'm rewarded with significant productivity gains. I rarely feel Python slows me down, and on the other side I constantly marvel at how fast I code with it compared to other languages.
Here is the Python code from the article:
    import sys

    def fibo(n):
        if n <= 1:
            return n
        else:
            return fibo(n-1) + fibo(n-2)

    if __name__ == '__main__':
        for n in sys.argv[1:]:
            print(n, fibo(int(n)))
And here is my Raku code:
    sub MAIN(*@n) {
        say (0, 1, *+* ... *)[@n]
    }
So, even better on the "quick to code and easy to maintain" imo.
The performance comparison is interesting too...
    CPython  9.72 - 22.10 secs
    PyPy     1.65 secs
    Node     1.76 secs
    Rust     0.25 secs
    Raku     0.12 secs

    > time ./fibo.raku 10 20 30 40
    (55 6765 832040 102334155)
    ./fibo.raku 10 20 30 40  0.12s user 0.02s system 114% cpu 0.124 total
Haha (admittedly I am on an M1, not an i5, so this is a bit apples vs. oranges... but CPUs get faster all the time and Raku has no GIL)
FWIW I find it really hard to believe your bench marking was reliable / repeatable. Mistakes happen, and I think mistakes must have happened in this case.
I'll write more about this in another reply to another of your comments.
The Python example is the pure naive recursive algorithm. It isn't meant to be optimized itself, but rather to reveal the capacity of the interpreter. The Raku code is, of course, unreadable gibberish, but I assume it contains some basic optimizations. At that point, it's unsurprising that it performs better, and in fact the runtime is somewhat embarrassing, though probably comparable to the performance of a similar algorithm in Python.
Maybe. I did not set out to do a thorough benchmark - as I think is clear from the lack of rigour in my reply. Rather I was curious to find out how close Raku would be to Python expecting it to be rather slower. By chance the example picked by the OP - fibonacci - is best written in Raku via the `...` sequence operator and, yes, I would that is using hyper operators and internal concurrency to "cheat". I agree that Raku is not generally faster than Rust, or indeed Python. At this point in its evolution, there is still much work to be done on code optimisation.
When I say "haha" I mean <<that's quite funny since by chance the code given in the OP is much faster in Raku _in this particular instance_ and that's quite apt since some higher level operators such as `...` are actually quite useful and since they are core operations can be more tightly optimised than general code. So actually (in addition to the main thrust that human speed is more important than code speed), it seems that often our preconceptions are not always true>>
I am genuinely sorry that you find the raku gibberish - I suppose that I would find Malay gibberish but that's a reflection on me not on the inhabitants of Malaysia that use it every day.
Let me try a translation:
The sequence operator is spelled `...`. Here are a couple of basic examples:
    say 1,2,3 ... 10;   # (1,2,3,4,5,6,7,8,9,10) - an arithmetic sequence
    say 1,2,4 ... 16;   # (1,2,4,8,16) - a geometric sequence
    say 1,2,4 ... *;    # (1,2,4,8,16,32...) - lazy list with infinite length
The RHS value limits the size of the last item in the output list. A `*` on the right means `infinity`.
You can see that this looks at the list of three items on the LHS and determines the remainder of the sequence from that. If you want to use it with a function, you can use the `*` to represent a parameter, like this:
    say (1, *+7 ... *)[1..4];   # (8 15 22 29)
You will note that I am using the regular `[]` index syntax with the range `1..4` to request just items 1 through 4 of the resulting lazy list.
So, the final bit of the puzzle is when I have a dyadic function that takes two parameters (i.e. the previous value and the one before it), like this:
    say (0,1, *+* ... *)[10,20,30,40];
This is the fibonacci series.
---
One last word is that the raku MAIN() function is a very neat way to deploy a raku script on the command line.
    sub MAIN(*@n) {
        say "$_ " ~ (0, 1, *+* ... *)[$_] for @n
    }
So that's "slurp" all the indices provided on the command line to array `@n`, which can then be used as the index to the sequence.
PS. I have tweaked this a bit to include printing the index and the result.
---
Even if this helps to de-gibberish the code, I understand that you may not like it. Personally I find it quite clean and easier to read than the Python code.
> I would [guess?] that [Raku(do)] is using hyper operators and internal concurrency to "cheat".
You missed a word. I've guessed it was the word "guess". :)
I haven't checked Rakudo's code but I'm pretty sure any performance optimizations of your code were not related to using hyperoperators or internal concurrency.
Here's two things I can think of that may be relevant:
* Rakudo tries to inline (calls to) small routines such as `* + *`. Wikipedia's page on inlining claims that, when a compiler for a low level language (like Rust) succeeds in inlining code written in that language it tends to speed the code up something like ten percent or a few tens of percent. In a high level language (like Raku) it can result in something like a doubling or, in extreme cases, a ten fold speed up. The difference is precisely because low level languages tend to compile to fast code anyway. So while this may explain why Raku(do) is faster than CPython, it can't explain your conclusion that Rust is half as fast as Raku. (I think you almost certainly made a mistake, but let's move on!)
* In your Raku code you've used `...`. That means all but the highest number on the command line are computed for free, because sequences in Raku default to lazy processing, and lazy processing defaults to use of caching of already generated sequence values. So a single run passed `10 20 30 40` on the command line would call the `* + *` lambda just 40 times instead of 100 (10+20+30+40) times. That's roughly a doubling of speed right there.
So if Rakudo is doing a really good job of codegen for the fibonacci code, and you removed the startup overhead from your Raku timings, then perhaps, maybe, Raku(do) really is "beating" Rust because of the `...` caching effect.
I still find that very hard to believe but it would certainly be worth trying to have someone reasonably expert at benchmarking trying to repeat and confirm your (remarkable!) result.
> At this point in its evolution, there is still much work to be done on code optimisation.
Understatement!
It took over a decade for production JVMs and JS engines to stop being called slow, another decade to start being called fast (but not as fast as C), and another decade to be considered surprisingly fast.
Rakudo's first production release came less than a decade ago. So I think that, for now, a reasonable near term performance goal (I'd say "by the end of this decade") is to arrive at the point where people stop calling Raku slow (except in comparison to C).
> Let me try a translation:
Let me have a go too. :) But I'll rewrite the code:
    sub MAIN                    #= Print fibonacci values.
        (*@argv);               #= eg 10 20 prints 55 6765

    print .[ @argv ]            # print fibonacci values.
        given
        0, 1, 1, 2, 3, 5,       # First fibonacci values.
        sub fib ( $m, $n )      # Fibonacci generator
        { $m + $n } ... Inf     # to compute the rest.**
> Python is by far the fastest to write, cleanest, more maintainable programming language I know, and that a bit of a runtime performance penalty is a small price to pay when I'm rewarded with significant productivity gains.
> (…)
> Before you ask, no, I did not include Ruby, PHP or any other of the "slow" languages in this benchmark, simply because I don't currently use any of these languages and I wouldn't start using them even if they showed great results on a benchmark.
Why bother with benchmarks, then? Clearly you don’t care for them and just want to continue using Python because that’s what you’re familiar with and like. That’s fine, it’s your prerogative, you don’t have to justify that choice to anyone who doesn’t have the same priorities you do.
Personally I dislike Python and find it to be far from “the fastest to write, cleanest, more maintainable programming language” and I wouldn’t want to use it even if its performance were amazing. But that doesn’t matter, if we all liked the same things we wouldn’t have so many programming languages. Or operating systems. Or brands of sneakers. Or…
When I run benchmarks, I do so in real use cases, to test different algorithms for a task that will be in production.
I've also seen order-of-magnitude improvements going from both C and Java to Python (in the latter case also a similar reduction in memory consumption). It's certainly true that CPython should be faster, but I think we're prone as a field to focusing on the cool compiler/language-level Real Programming™ problems rather than dwelling on how inefficiently most programs are designed. The language matters, of course, but it also takes a certain humility to say we picked the wrong abstractions or spent time on the wrong parts, or that our expectations were shaped by microbenchmarks unlike the code we actually write.
I’ve lost track of how many times someone jumped to “ZOMG THE GIL SUCKS!!!!” but the real lesson turned out to be something like “maybe I shouldn’t open files in the middle of a nested loop”[1] or “the C extension used for dict/re/Numpy/etc. is actually pretty hard to beat”. One big problem for comparisons is that most of the time people don’t do a straight rewrite but change the algorithm based on the experience they got writing the first version - my Python beating C examples all fell into that category where it was really a refactor nobody wanted to do in a gnarled C codebase, or simply didn’t have time to make in a more limited language.
1. Someone I know ranted about Python, spent a month rage-porting to Go, and saw ~10% over the improved Python version.
For microbenchmarks, yes. My comment was referring to a lot of the folk knowledge derived from porting larger applications where people attribute gains to a new language which incorporate significant architectural changes as well.
I don’t think anyone is downvoting you for saying Python is slow. You were downvoted because it’s not a substantive comment. You’re replying to a title instead of engaging with the contents of the article, and even then it adds nothing to the discussion since “that slow” isn’t well defined, anyone could say “yup” or “nope” to any imagined value of “that”.
> Casey Muratori has a real in-depth course on why Python is slow and how to make it fast
I don’t think that’s a fair characterisation of the course and that description does it a disservice. The course is not about Python, it’s about programming with performance in mind.
The video on Philosophies of Optimisation is a good primer on the tenets of the course.
> Python can be considered slow compared to compiled languages because it is an interpreted language, which means code is executed at runtime rather than being compiled into native code beforehand.
This is often perpetuated by Python devs, but it's only half the story. Pretty darn fast JIT compilers can give static compilers a run for their money.
The problem is the underlying semantics and the guarantees given by the language itself. The more constraints you have, the more wiggle room the compiler has for optimization. And this is not nearly as cut and dried as most think, since a JIT can leverage extensive runtime information to make better decisions than static compilation could.
It's the highly dynamic nature of Python that makes it highly un-optimizable. There are several crutches in place, such as list comprehensions, but the memory model is riddled with indirections.
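The indirection point is easy to see from CPython itself (sizes below are typical for 64-bit CPython and vary by version): every element of a list is a pointer to a full heap object, even for small ints:

    import sys

    print(sys.getsizeof(1))          # ~28 bytes: a whole object, not a machine word
    print(sys.getsizeof([1, 2, 3]))  # size of the list itself; elements live elsewhere

    nums = [1, 2, 3]
    # id() exposes the extra hop: each element is a separate object reference
    print([id(n) for n in nums])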