
I don't understand the "make test" thing. If you're temporarily hosting pages for testing, doesn't that mean you also need something to pull those pages down and validate them? If so, why can't you just write to stdout and validate the pages that way, with some simple mechanism to pass POST/cookie information as a parameter (much like you would with wget)?
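
Something like this is what I have in mind, as a rough sketch (the script name, query string, cookie and expected text are all made up): drive the CGI binary directly, pass the request data in through the standard CGI environment variables, and check whatever it writes to stdout.

    <?php
    // Rough sketch: run php-cgi directly, feed it request details via
    // standard CGI environment variables, and validate its stdout.
    // All paths and values here are invented for illustration.
    $env = [
        'REDIRECT_STATUS' => '1',              // php-cgi expects this when cgi.force_redirect is on
        'REQUEST_METHOD'  => 'GET',
        'QUERY_STRING'    => 'id=42',          // becomes $_GET
        'HTTP_COOKIE'     => 'session=abc123', // becomes $_COOKIE
        'SCRIPT_FILENAME' => __DIR__ . '/page.php',
    ];
    $proc = proc_open('php-cgi', [1 => ['pipe', 'w']], $pipes, null, $env);
    $output = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    proc_close($proc);
    var_dump(strpos($output, 'expected text') !== false); // crude validation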



> I don't understand the "make test" thing. If you're temporarily hosting pages ...

That has nothing to do with user code or hosting pages. Most people install PHP through their package manager (e.g. "yum install php"). Some people, and the maintainers of those packages, need to compile PHP from source (PHP is written in C). The people who develop PHP wrote thousands of tests to make sure the compiled binary works properly. Those are the tests being referenced.
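
Roughly speaking, each test is a small .phpt file that the run-tests.php harness feeds to the freshly built binary and then diffs against the expected output. A toy example in that format (not an actual test from the suite):

    --TEST--
    Toy example of a test in the phpt format
    --FILE--
    <?php
    var_dump(str_repeat('ab', 3));
    ?>
    --EXPECT--
    string(6) "ababab"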


Right, I think the poster you're replying to understands that. The question is: given that there's a command-line interpreter (php-cli), built as part of a normal build, that will execute PHP files and print their output, why do the tests need to go over HTTP at all?


I thought I made that pretty clear with the opcode cache example. On the first request to a specific URL the script gets compiled, executed, and cached. On the second request we skip the compile step and point the executor at the op_array in shared memory. We could do something hackish and try to persist that shared memory segment for the CLI case, but that would be yet another artificial environment. We want to test as closely as possible to real production use, and the best way to do that is to use an actual web server for any test that relies on state persisting across requests.
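
As a sketch of what such a test looks like, assume a single-worker server with opcache enabled is already serving a test docroot (the address, docroot and script name below are all invented):

    <?php
    // Sketch: exercise cross-request persistence by hitting the same URL
    // twice through a real web server. The probe script simply reports how
    // many opcode-cache hits the server has served so far.
    file_put_contents('docroot/probe.php',
        '<?php echo opcache_get_status(false)["opcache_statistics"]["hits"];');

    $first  = (int) file_get_contents('http://127.0.0.1:8080/probe.php'); // compile + cache
    $second = (int) file_get_contents('http://127.0.0.1:8080/probe.php'); // served from shared memory
    var_dump($second > $first); // bool(true) if the second request hit the cache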


> why do your tests need to go over HTTP at all?

Just off the top of my head, how would you test file_get_contents(), which can take a URL as a parameter?

You can't just throw http://www.google.com/ in there. You might be compiling on a machine that's not on the net, firewalled, etc.

A local test server becomes useful for functionality that normally relies on an external HTTP server to work.
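
For example, here's a sketch of the kind of check a local test server makes possible (the address and fixture name are invented; the test knows the fixture's exact contents in advance):

    <?php
    // Sketch: test the http:// stream wrapper against a local server
    // rather than an external site.
    $body = file_get_contents('http://127.0.0.1:8080/fixture.txt');
    var_dump($body === "hello from fixture\n"); // compare against known contents
    var_dump($http_response_header[0]);         // e.g. "HTTP/1.1 200 OK"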

The OP also mentions several more scenarios where it is useful and even necessary.


That's a slightly different scenario: you're exercising client protocol functionality. (Incidentally, you don't need a web server for that; you just need a very stupid socket server that returns a file. Also, file_get_contents looks like it can access ftp/ssh/gopher URLs, so if you need a web server, you also need servers that speak those other protocols.)
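
Something along these lines is all it takes (a sketch; the port and file name are made up):

    <?php
    // A deliberately dumb one-shot server: accept one connection, send back
    // a canned HTTP response containing a file, and exit. Enough to exercise
    // a client like file_get_contents('http://...') in a test.
    $server = stream_socket_server('tcp://127.0.0.1:8080', $errno, $errstr);
    if ($server === false) {
        die("listen failed: $errstr\n");
    }
    $conn = stream_socket_accept($server);
    fread($conn, 8192);                       // read (and ignore) the request
    $body = file_get_contents('fixture.txt'); // the file we want to hand back
    fwrite($conn,
        "HTTP/1.0 200 OK\r\n" .
        "Content-Length: " . strlen($body) . "\r\n" .
        "Connection: close\r\n\r\n" .
        $body);
    fclose($conn);
    fclose($server);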



