Hacker News

Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

PHP files can be deployed independently, swapped out or updated live.

No building/compiling of the php files needed.

A single layer, as opposed to 'modern architecture' with separate front- and back-end layers, an API layer, plus logic, validation, data-access, and ORM layers.

Can extend itself as it runs. For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

Intuitive, simple, powerful. It can be as easy as editing a PHP file in Notepad and dropping it on an FTP server. Deployed.

AWS Lambda may have more in common with PHP in terms of discrete deployable units of functionality. What's old is new again.

Compiling/deploying an entire system to change a single endpoint feels backwards after using PHP.
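That file-per-endpoint model can be sketched as one self-contained script; the file name, query parameter, and data below are made up for illustration:

```php
<?php
// widgets.php - a complete, independently deployable endpoint.
// Dropping this one file on the server "deploys" it; no build step,
// no router, no framework.

// Look up a widget by id. A real script would query a database;
// a hardcoded array keeps the sketch self-contained.
function find_widget(int $id): ?string {
    $widgets = [1 => 'sprocket', 2 => 'flange'];
    return $widgets[$id] ?? null;
}

// Each request to /widgets.php?id=1 runs this file top to bottom.
$id = (int)($_GET['id'] ?? 0);
header('Content-Type: application/json');
echo json_encode(['id' => $id, 'name' => find_widget($id)]);
```

Swapping this file on the server replaces the endpoint on the very next request, which is the live-update property described above.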




> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

Which becomes a security issue due to accidental endpoints or uploads becoming endpoints. Or becomes a mess of imports. Either way, PHP frameworks often end up with a central router anyway.

> PHP files can be deployed independently, swapped out or updated live.

Which means some people try to do that the naive way and end up breaking a few requests that happen during the deployment.

> Can be as easy as editing a PHP file in Notepad and dropping it on an FTP server.

Which causes https://stackoverflow.com/search?q=headers+already+sent+bom&... because people don't realise they had an invisible character before all the code.

I really don't think any of those are a good thing.


Even worse, Tumblr had an incident where they accidentally changed <?php to i?php (my guess would be editing directly on the server with vim?) and exposed not only their source code but also credentials.

https://news.ycombinator.com/item?id=2343351

It would have been possible for tumblr to avoid this with good development practices (don’t edit code live, don’t bake credentials into code, etc) but I imagine there was a culture of doing it at the time.

I don’t think it’s fair to tell developers “use our language, it’s super easy!” and then expect them to discard all of those bad habits as they start putting things in production.


You can create a mess of code, open security holes, and/or be hit with ‘gotchas’ in any web framework. PHP is much less complex than most.


These are things very specific to PHP. Yes, there are language-specific gotchas in many environments. But I'm criticising specific things that do exist and that I've seen causing issues in real deployments. Things that make PHP accessible make it also an excellent footgun.


These things are very generic and apply to both Lua and ASP.

Invisible characters are a problem in every language, every configuration file, every file, period. Trying to suggest that it's specific to PHP is just silly.


The difference is in the handling, though. If a config parser runs into a BOM, it's likely to tell you on which column and row (0,1) it broke. Same when you compile something. Or it may even recognise the BOM and handle it transparently.

PHP does something different: it outputs the BOM early, and when you later try to send a header, it tells you it's too late. That error is in no way helpful at that point unless you already know about this issue, and it won't even help you figure out which file was affected. That part is specific to PHP.
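As a sketch of the failure mode (assuming a UTF-8 BOM, the usual culprit): the three BOM bytes sit before `<?php`, PHP emits them as output, and the later `header()` call dies. The check below is one PHP itself never runs for you:

```php
<?php
// A UTF-8 BOM (bytes EF BB BF) before "<?php" counts as page output,
// so any later header() call fails with "headers already sent" -
// with no hint that an invisible character was the cause.

const UTF8_BOM = "\xEF\xBB\xBF";

// Report whether a file's source starts with the invisible BOM.
function has_bom(string $source): bool {
    return strncmp($source, UTF8_BOM, 3) === 0;
}

// What a BOM-inserting editor actually saves, vs. what you see:
$broken = UTF8_BOM . "<?php header('Location: /login');";
$clean  = "<?php header('Location: /login');";
```

Running `has_bom(file_get_contents($path))` over a codebase is one way to find the offending file, since the error message won't name it.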

I'm not familiar with Lua pages, but all I can find uses an HTTP / FastCGI server with explicit routing. Classic ASP has its own terrible ideas and it's close to death now - 2025 is the last year MS committed to supporting it - so I don't really think of it as an interesting language anymore.


So.. what you're trying to highlight is that a language is supposed to make up for the incompetence of the person using it?


Of course. That's the only reason to invent languages: to take care of the stuff people are not good at taking care of, move that work to the computer, and let us work at the level we are good at.

Else we'd all be using assembly.

There's absolutely no pride or glory in manually doing, nicely and securely, things the computer could have automated in the first place.

Anything the language allows that it could have refused, while still letting devs express the same features, and that results in bugs, is a mistake in the language (e.g. the sorry state of strings in C).


Incompetent people will create incompetent things regardless of the tool. Simpler tools lead to simpler messes while complicated tools lead to complicated messes.

I've seen an attitude that people think they can inoculate themselves against inept programming by using obtuse frameworks, as if Martin-Fowler-speak acts as a drill sergeant making disciplined coders out of the herd.

But after 20 years of bouncing around startups I've never seen the intended results actually happen a single time. Not even close. Not once. Never.

Instead it leads to larger, less maintainable, more convoluted messes that have to be trashed quicker. Giant ceremonial cargo cult style monstrosities with huge circuitous logic - 4, 5, maybe 6 layers, a router calling a controller, calling a service, calling a provider, calling an event model, which runs a single if statement ... as if that's how we protect ourselves against incompetence.

These approaches just lead to wasteful projects where the whole thing ends up rewritten in whatever the framework/language du jour is, instead of easily maintainable, quickly understandable code that's designed to work for the next 10 years. I've talked to many programmers who are embarrassed by the language they are using... wtf is that?! They've turned programming into fast fashion.

Then people like to ask what someone's favorite language is, usually when they first meet them, as a social cue, as if we are a bunch of high-school kids following pop music. I mean, what on earth... we're supposed to be building the future here, not running around like fanboys from platform to platform, just to mess everything up all over again in bold new ways using slightly different syntax.

The best thing to do is give people the least abstract thing with the fewest conformity requirements ... essentially make it open ended and then the messes are easier to spot and easier to fix. You won't get 4 folders with 26 files handling simple tasks like uploading images to an S3 bucket (saw this huge mess just last week and guess what?! It's broken. I know, surprising right?)

Anyway, new shiny fancy tools with GoF buzzwords won't ever fix incompetence, it'll only make it worse.


>Incompetent people will create incompetent things regardless of the tool

Which is neither here nor there.

For one, it ignores the pragmatic issue, that very competent people (the very people that built the foundations we all work on even) will still make lots of mistakes, even trivial ones, but with severe consequences (e.g. buffer overflows) when the languages don't prevent them.

If only it was just "incompetent people" that made mistakes...

>But after 20 years of bouncing around startups I've never seen the intended results actually happen a single time. Not even close. Not once. Never.

You weren't looking hard enough. Every day millions of programmers don't make "buffer overflow" errors for example, that otherwise they'd have made, because they work in languages that don't allow them.

And they'd have made those mistakes regardless of their programming chops. The best programmers, people that run circles around you and me, still make those mistakes.


Totally agree with this.

The way I think about it: consider the entire software stack, all the instructions that get executed across all the machines, their operating systems, and the programs running underneath before things even get to your code, and then all of the standard library and framework code on top of yours. Just loading a simple web page is a trillion-piece jigsaw puzzle, and every single piece has to line up or the whole thing just doesn't work.

We do the humbling and the remarkable every day, and trillion-piece jigsaw puzzles are no joke. Getting it right is the exception. Given all the pieces required, that's a lot of sources of potential entropy, and the more it increases, the more the system destabilizes and/or becomes unworkable. Languages, libraries, frameworks, etc. make certain decisions on your behalf with the goal of containing some of that entropy within their given abstraction.


Those mistakes aren't easy to spot or easy to fix.

It's about giving code sunlight, so that action at a distance and other kinds of magic don't hide errors, get in the way of fixing them, make reproducibility a mess, and turn confirmation into pure guesswork.

It's the restrictive design trend of crippling languages that needlessly prevents that sunlight effect, along with the "information hiding principle" run completely amok: the information ends up successfully hidden in dozens of innocently named files, with listeners, observers, watchers, triggers, and who knows what else being mysteriously called through reflective programming, so not even grep will help you.

Static code analysis and seamless navigation are totally things of the past.

Instead, the errors will have the stack of the error handler and that's it. The debugger is useless because stepping through the code is 98% scaffolding.

All these fancy tools bludgeon any introspection or diagnostic system, so the only remaining workable debugging method is printing debug variables and rerunning the code, like I'm programming on a TI-81 (except that had debug and release run modes... features I can usually only dream of these days). Progress! Welcome to 2019!

It's crazy. This isn't how maintainable code is written.


You are speaking to the inmates writing the asylum. Give it up. This is the 4chan of the enlightened.


The amateur-to-expert ratio of any topic is directly proportional to its popularity. It's why popular things are polluted with well-intentioned bogus information.


> You won't get 4 folders with 26 files handling simple tasks like uploading images to an S3 bucket (saw this huge mess just last week and guess what?! It's broken. I know, surprising right?)

You can see right here on HN anytime a post comes up about using a cloud provider someone advocating putting a layer of abstraction over the provider’s SDK to prevent “lock-in”. As if the CTO is going to one day move their entire infrastructure because a developer promises them they’ve abstracted their code perfectly.

So instead of just being able to read the docs of the SDK, you have a custom QueueManagerFactory that gives you an AWSQueueManager that wraps the Boto3 AWS SDK just so one day if the company decides to move to GCP, someone can write a GCPQueueManager.

See also, developers who think they can effortlessly move from their company’s six figure Oracle installation to Postgres because they used the repository pattern.


Suggesting to just manually go in and change the touchpoints over if the time comes is seen as uncouth, as if we're in an Oscar Wilde play and I'm some unwashed ruffian from the alley.

This is despite the fact that if you do it it'll take 40 minutes manually versus 10 minutes if the Rube Goldberg abstraction machine works as planned (it won't).

Since there's only about a 5% chance (max) that going from say S3 to Azure will ever happen, the extra cathedral of abstraction saves an actuarial 1.5min of dev time.

All that for only 2-4 days of development to build it, plus the added runtime cost on every request for the convenience. Genius!


I think it depends on whether the library you're using is well designed and doesn't require contorting code around it. If it's a simple integration, I don't see much of a point in abstracting it, but if dealing with the library is painful on some level, you'd maybe get some benefit from doing so. That being said, this is somewhat orthogonal to the issue you're describing.


I do have a *Utils module to wrap some calls to the AWS SDK, but it's not to protect from “lock-in”; it's the classic DRY principle of putting a code snippet in a function so I don't have to remember how to do it every time.


A poor abstraction is always a poor abstraction, but a good abstraction can not only isolate your components from change but also provide a better interface.

I'm not going to write raw HTTP requests to S3 in every place in my code that I need to read/write objects from there. What I'd rather have is a simple abstraction with methods like get(id) -> obj and put(obj) -> id.
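A sketch of that kind of abstraction in PHP; the interface and class names are invented for illustration, and a real version would add an S3-backed implementation behind the same interface:

```php
<?php
// Sketch of a minimal object-store abstraction: get(id) -> obj,
// put(obj) -> id. Names are hypothetical; an S3-backed version
// would wrap the AWS SDK client behind the same interface.

interface ObjectStore {
    public function get(string $id): ?string;
    public function put(string $obj): string;
}

// In-memory implementation, handy for tests and as a stand-in
// until the S3-backed version exists.
class MemoryStore implements ObjectStore {
    private array $objects = [];

    public function get(string $id): ?string {
        return $this->objects[$id] ?? null;
    }

    public function put(string $obj): string {
        $id = (string)(count($this->objects) + 1);
        $this->objects[$id] = $obj;
        return $id;
    }
}
```

The point is the two-method surface, not hiding the provider: calling code reads and writes objects without caring whether the backing is S3, disk, or memory.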


Seeing that you can still use the same SQS API from 2006, and that in 2018 AWS still supports SOAP of all things for S3 (https://docs.aws.amazon.com/AmazonS3/latest/API/APISoap.html), I would much rather be able to just look at the SDK to see how something is used than have to debug through an abstraction that the “architect” who left the company years ago put in a custom NuGet package, trying to abstract the API and impose a “standard”.

Yeah I have been through that before where the architect of the company wrote his own bespoke ORM, logging framework, etc. and he was the only one who knew how it worked.


> Incompetent people will create incompetent things regardless of the tool.

I think I'm justified in calling myself competent. Nevertheless, with the wrong tools, the things I build are definitely worse than the things I can build with proper tools.


You've never had the reaction of "what on Earth is this crap doing?", looked at the tool, thought "omg, what kind of flunkie wrote this", and then ended up forking the project, doing negative coding, fixing the issues, and then having to address the issue threads on GitHub yourself because the "maintainer" stopped responding a year ago?

I mean it's just a huge waste of time. These modern stacks (mostly js) are crap code all the way down.

It's made me want to return to Perl because honestly, it has everything and is somehow mostly idiot free. I probably should...


I have never had that reaction because frankly, I don't think it's appropriate to call people who know less than I do 'flunkies'. Sometimes, I like to approach the situation with humility and ask questions. Other times, I quietly ignore the situation. And other times, if it's a particularly egregious error, I'll write code and explain why I think it is better. But I never call someone a flunkie because words and attitudes like that are incredibly rude and toxic.

The moment you convey that sort of attitude, two things happen. The person you called a flunkie will not learn a goddamned thing from you. And, if they happen to be right, you won't learn a damned thing either. Great choices.


Using failure as infrastructure at fast-paced startups with shoestring budgets, and then finding the resources to afford the luxury of nurturing, caring mentorship for every teenager with a computer, is a disaster.


That is true.

However, you can pursue a culture of excellence without being rude and toxic. You can be kind and humble without coddling.

Edit - I should have added that if teenagers with computers have a good attitude, feel engaged and feel cared about, they can be extremely productive members of a team. And they have a tendency to grow into really amazing engineers (and fine people).


Hmm, I'm sorry... I'm having trouble understanding your point, and specifically how it relates to my comment.


I parsed it wrong. I thought you essentially said "the things I build are definitely worse than relying on a collection of random internet dudes' code through npm." But yes, GNU make, emacs, yacc, lex, bison, ar, nm, there's lots of great tools.

It's 4am, I should sleep.


Oh, haha no worries - sleep well :)


> Incompetent people will create incompetent things regardless of the tool

It's not binary. It's probabilities. Some environments make it harder to mess up, others make it easier. This is real.


Well let's not pretend software is an engineering discipline.

It's art, fashion and politics.


I agree with you 100%. I just want to add that this over complication of things is not just an IT thing. Try taking up a new hobby whether it is cycling or surfing. In no time you will have the "experts" telling you that a $200 bike is useless and is a waste of time. You need to spend at least $5000 to be part of the club. Now if you are a "professional" cyclist then spending a lot of money on a bike makes sense. For the rest of us, just getting on a bike for exercise is enough. I think some people just need to show they are better and know more. This is where some of these complications come from. Sometimes, of course, it is just plain incompetence.


It is much more of a lifestyle thing, you're right! With all things, I'll get the functional adequate version and use it until it no longer functions. Sometimes I'll buy multiple so as not to be bothered to repeat the shopping process when one wears out (I own many unopened identical pairs of shoes and glasses for instance)

Brands generally mean nothing to me, new consumer technology I'm generally not interested in, and I have no issues say, taking the bus and getting reading done instead of rolling around in say a Tesla (despite the fact that buying one is well within my financial reach). I honestly don't care in the slightest.

So yes, it's probably a larger personality disposition which manifests itself in this particular way moreso than it is a morsel of objective rational reality.

The people I lambast are the same ones with things like smart speakers and wifi connected refrigerators (I use an old minifridge and I prefer it). It's just a lifestyle; not some objectively poor use of money and time resources.

That's a nice perspective and it helps explain a lot, thanks.


We created higher level languages to reduce time spent coding. We wouldn't be using assembly.

It appears that we think about different terms when visualizing what "incompetent" means.

We create languages to tackle different sets of problems, and we want to minimize human error - that part, I believe, we can agree on.

However, if you perform "SELECT * FROM mytable" (on a table that grows indefinitely) and then sort/limit in the language and not the database, you're incompetent; you simply lack knowledge, and you didn't even think abstractly about what can happen by doing so. There's no language out there that can teach you "right tool for the job" or "keep it simple" or "should I do it, maybe there's another way, did someone else have this problem?", no matter what wizard creates it.
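The antipattern and the fix can be shown side by side; this sketch runs against an in-memory SQLite database with a made-up table, and assumes the pdo_sqlite extension is available:

```php
<?php
// Antipattern vs. fix, runnable against in-memory SQLite.
// Table and column names are made up for illustration.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE mytable (id INTEGER PRIMARY KEY, score INTEGER)');
$insert = $db->prepare('INSERT INTO mytable (score) VALUES (?)');
foreach (range(1, 1000) as $i) {
    $insert->execute([($i * 7) % 1000]);
}

// Antipattern: fetch every row, then sort and limit in PHP.
// Memory and transfer grow with the table, which grows indefinitely.
$rows = $db->query('SELECT * FROM mytable')->fetchAll(PDO::FETCH_ASSOC);
usort($rows, fn($a, $b) => (int)$b['score'] <=> (int)$a['score']);
$topInPhp = array_slice($rows, 0, 10);

// Fix: let the database sort and limit; only ten rows ever leave it,
// and an index on score would keep the sort cheap.
$topInDb = $db->query('SELECT * FROM mytable ORDER BY score DESC LIMIT 10')
              ->fetchAll(PDO::FETCH_ASSOC);
```

Both produce the same ten rows today; only the second keeps working once the table has grown for a few years.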

We will never weed out incompetent people by creating languages and a language shouldn't cater to a moron.


Totally disagree with this parental assessment of a language. It certainly doesn't apply to spoken languages or most ideal communication modalities. I think you want to fix your problem, which isn't a problem for someone else. You see this everywhere on this site: SV kultur and the catch-22 pronouncements of the correction generation.


Exactly. That's why most languages, for example, have reasonable string semantics that don't let you copy a string, overwrite memory, and corrupt the call stack.

There is a reason most web sites don’t use C.


Yes.

Humans are fallible. That is an immutable truth.

There’s no great wisdom in relying on human discipline alone to produce reliable software.


Eh, maybe for something simple.

In a complex product (as in, most professional settings or large services/web sites) the PHP I've seen is grossly complex, and not intuitive—especially WordPress deployments.

Dev teams have to perform all kinds of gymnastics with the PHP code to get it to do what they want, and it's just a flat-out nightmare. I've made sure my boss and colleagues know that I have no desire to work with it.

It always ends up a house of cards, full of vulnerabilities, so that something regularly gets exposed that allows at least editorial access. Security issues that should otherwise be no problem.

God knows how many other vulnerabilities exist. Every single server I've ever had has had bots and crawlers polling for common PHP files in an effort to exploit them, should they exist.

I just don't like having even the chance that my back-end source could, for want of a single character, spill out to a web page upon request.

And then there's performance...

Anyway for simple things, sure. For things that don't require much security, sure. For anything else... it's not for me.


> "...especially WordPress deployments."

I've done enough WP to certainly agree with you. However, I don't think it's fair to judge PHP by how WP has abused it.

WP's problem is its desire to sacrifice code quality and best practices for market share. That is, many things don't get refactored (into something OOP-based) because there might be backwards-compatibility issues. Again, this isn't PHP's fault.


I'm not all out against PHP. I've used, and still do, on occasion for simple things and prototyping.

That said, I'll never go out of my way to use it for much else—let alone serving and routing something public-facing ever again.


Check out Laravel, it powers some Fortune 100 sites -- it's definitely "large-site" capable.


I did one project in Laravel. I was impressed. That said, it takes WP to the opposite extreme. That is, it has little regard for backwards compatibility. Finding answers / examples via The Google is messy and frustrating, at least for someone who is new to Laravel.

I like that it's forward-thinking, but it moves so fast that it has negative impact on user (i.e., dev) experience.


Laravel performance is abysmal, though. You're gonna spend 10x on server costs if you use a fully featured framework in PHP.


Not a fan of PHP, but people have argued convincingly that server costs are very low compared to developer costs.


> You can create a mess of code, open security holes, and/or be hit with ‘gotchas’ in any web framework

Yes, but the holes and gotchas are worse when they are baked into the language. The thing that Golang gets right, is that the developers are willing to eschew features to avoid gotchas. It's like other languages are sports cars with sexy lines and fancy features that everyone wants, but have to spend more time "in the shop." (An analogy for in the debugger.) Golang is more like a base model Toyota Corolla with a manual transmission.

Not sure what PHP is like. Maybe some sort of modder car. Like Perl and Ruby, PHP grew by readily adding features in response to demand. In that way, it's the opposite of Golang.

> PHP is much less complex than most.

That depends on what level you're looking at. PHP is one of those languages that has a lot of complexity baked into it in the form of language design warts. In order to make it a nimble language, you have to ignore/eschew parts of it, and stick to some "good parts."


No, seriously: the old dynamic-script-as-program model required a web server configured properly to avoid leaks of a thousand faces... thanks but no thanks.

I'll be flask-ing


Just use Laravel and its ecosystem; it's just as easy or easier to set up than Flask and easily production-ready. This is an old take on PHP and no longer true in 2019.


I know about Laravel, but the comment above held up the LAMP era as golden... you know.


With PHP it is much easier to hit all of the above.


PHP is also easier to get started, easier to understand and easier to modify. So it's a trade off.


This also leads beginners to overestimate themselves, resulting in piles of live, insecure PHP code.

Just because it works, doesn't mean it's good or secure. And while it's very easy to see if something works or not, it's very hard to see if something is secure.


I think you indicated the problem. You can be secure or insecure in just about any language, but the barrier to entry is so low in PHP that it is easiest for beginners to do a lot of damage while still launching an app.

I’d rather there be some learning curve so people understand what they’re doing (or not doing).


> PHP is also easier to get started, easier to understand and easier to modify. So it's a trade off.

And as a result also easier to achieve proficiency in.


PHP has a terrible track record for security.


If "people use it wrong" was a good excuse not to use a language, I can say that every language I've worked with is bad.


This is pretty much a laundry list of anti-patterns for modern web development.

Non-atomic deployments? Not using Composer to manage dependencies? No separation of concerns? Editing files directly on the production environment?

I get that it's nice for beginners, but they're not benefits for professionals. We have better ways of doing things than Notepad and FTP.


vim and scp?


Routing is a feature because you can express more complex semantics. Most PHP apps depend on Apache-specific .htaccess files to handle redirects and other routing, and it becomes a mess quickly.

PHP files can be swapped out live because many CGI wrappers don't cache anything in memory, which means at high strain you start dealing with disk load, file locks, etc.

There are many other languages that also don't require compilation, often at the trade-off of lower performance. Some really clever languages make compilation an optional step.

A single layer means your frontend logic, data models, and database access all run the risk of being one giant mess. Sure other languages run that risk too, but it's especially prevalent in PHP.

PHP can extend itself - it can also be used to download attack vectors and execute them directly on the server with no concept of code signing. Other languages offer powerful facilities for remote updates if they're desired, but that's usually not the best approach for Web.

You can edit other languages with notepad too.

Being able to deploy a single endpoint to a live server and hoping it doesn't cause problems sounds like terror.


Apache MultiViews makes pretty URLs easy.

  www.example.com/widgets.php
can be reached at:

  www.example.com/widgets
But here's the kicker:

  www.example.com/widgets/123
will also route to widgets.php, with "/123" in the $_SERVER['PATH_INFO']. No mod_rewrite needed.

  www.example.com/widgets/you/can/have/multi-segment/paths
goes to widgets.php too, with its PATH_INFO set to "/you/can/have/multi-segment/paths", but I don't usually go that far. Usually the script name roughly corresponds to the name of a table (or database view), and the PATH_INFO corresponds to the primary key.
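For reference, the Apache side of this needs only a couple of directives; a minimal sketch, assuming mod_negotiation and mod_php are loaded (the directory path is illustrative):

```apacheconf
<Directory "/var/www/html">
    # MultiViews lets a request for /widgets negotiate to
    # widgets.php without mod_rewrite.
    Options +MultiViews
    # Accept trailing path segments, so /widgets/123 reaches
    # widgets.php with $_SERVER['PATH_INFO'] set to "/123".
    AcceptPathInfo On
</Directory>
```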


Caveat: there are cases where this behavior has resulted in vulnerabilities. e.g., CVE-2018-10661[1]. So if you ever want to implement your own auth module in apache httpd you should be aware of this.

[1] https://www.vdoo.com/blog/vdoo-discovers-significant-vulnera...


If I'm reading this right, this is something that sounds a little bit like Apache MultiViews but isn't.

1. It doesn't sound like they were using MultiViews at all but some rewrite rule that rerouted all requests ending in .srv to a shell script in /bin. This isn't how MultiViews works. The file must exist, and it must be in the document root. A request to /foo won't work unless /foo.php (or foo.html, etc.) exists.

2. This rerouting was supposed to happen only for admins, but the authorization failed, due to a different bug, CVE-2018-10661.

3. The attack then depended on a bug in dbus, CVE-2018-10662

4. Finally it depended on a third flaw, CVE-2018-10660, having to do with shell-script injection.

So I don't think any of this should scare away a person from using MultiViews for .php scripts, which makes setting up clean, maintainable routes easier than any other technique I've seen.


> It doesn't sound like they were using MultiViews at all but some rewrite rule that rerouted all requests ending in .srv to a shell script in /bin

The request to a.srv (in their example) was only authorized because a request to /index.html/a.srv looked like a request to /index.html to the auth module because the auth module did not check PATH_INFO. The request was then passed to the ssid daemon (not shell script) over a UNIX socket.

a.srv ended up in PATH_INFO because index.html existed.

The developer(s) of the auth module only checked SCRIPT_NAME and index.html was valid for unauthed users.


MultiViews doesn't let HTML files have PATH_INFO by default. Only files that Apache considers "scripts" get PATH_INFO (e.g., .php). Therefore /index.html/a.srv would normally return 404 Not Found, even with MultiViews on.

They could have done further configuration to let HTML files take PATH_INFO, but this Rube Goldberg machine of multiple mistakes bears no more connection to MultiViews than mod_rewrite. In fact, I see no mention in the article of MultiViews or its module, mod_negotiation. So how do we know they were using MultiViews and not mod_rewrite, which people use much more often for this amount of indirection?

Either way, this exploit is impossible in the original suggestion, to just use MultiViews with PHP files to remove the ".php"


Apache is a robust piece of software, capable of more than the vast majority of users ever suspect.


You know, to this day, if someone came to me and said "we want to run a site for five years on an LTS Linux distro release on a fairly weak machine, with auto-updates to system packages but no other maintenance intended. We're not gonna bother with monitoring or whatever. It's mostly static but it'll have a small dynamic component, say 500 LOC worth, maybe. If we ever have to manually touch this thing aside from maybe bouncing it once or so a year we'll consider that a failure. What should we use?" I'd probably say... Apache and PHP.

Sometimes I'm in the middle of messing with the Lovecraftian horror that is the modern JS stack or screwing with extracting what I need from whatever awful request object someone cooked up or any number of other things that aren't accomplishing anything of business value and I wonder whether maybe somewhere we veered way in the wrong direction, as an industry.


nginx can also do it for any FCGI script.


I've seen that nginx can remove the .php suffix with its try_files directive, but I haven't seen how it can handle PATH_INFO of arbitrary length. Can you point me to the documentation you're thinking of?


I use a config like https://github.com/ibukanov/bergenrabbit/blob/master/webapp/... , it is pretty standard.
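For readers who don't follow the link, the standard nginx pattern looks roughly like this (socket path illustrative); `fastcgi_split_path_info` is what carves the trailing segment into PATH_INFO:

```nginx
# Route /widgets.php/123 to PHP-FPM with PATH_INFO="/123".
location ~ [^/]\.php(/|$) {
    # Split "/widgets.php/123" into script name and path info.
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # try_files resets $fastcgi_path_info, so save it first.
    set $path_info $fastcgi_path_info;
    # Refuse requests for scripts that don't exist.
    try_files $fastcgi_script_name =404;

    include fastcgi_params;
    fastcgi_param PATH_INFO       $path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```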


As other commenters point out, most PHP written today goes through a framework that negates most of these points (single entry point, modern architecture, code is precompiled/cached and optimised, and "simple and intuitive" goes out the window).

On the more general point, I am not sure scripting languages lend themselves better to lambdas, as dependencies are usually external to the system. For instance, PHP often relies on system json/xml libraries. Database access also needs extensions precompiled and installed on the system, as would crypto or internationalization routines.

Compared to that, I'd expect compiled runtimes to ship one single "fat" binary that contains most of the risky dependencies and relies only on the bare minimum provided by the system.


>Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

The vast majority of PHP frameworks, even micro-frameworks, don't work that way. They route all traffic (except static assets) to an `index.php`, which then forwards it to a regular router.
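That front-controller layout can be sketched in a few lines; the route patterns and handlers here are made up:

```php
<?php
// index.php - a minimal front controller. The web server rewrites
// every non-asset request here (e.g. via .htaccess or an nginx
// try_files rule), and this file dispatches to handlers.

function dispatch(array $routes, string $path): string {
    foreach ($routes as $pattern => $handler) {
        // Patterns like '/widgets/(\d+)' capture path segments.
        if (preg_match('#^' . $pattern . '$#', $path, $m)) {
            return $handler(...array_slice($m, 1));
        }
    }
    return '404 Not Found';
}

$routes = [
    '/'              => fn() => 'home',
    '/widgets/(\d+)' => fn($id) => "widget $id",
];

echo dispatch($routes, $_SERVER['REQUEST_URI'] ?? '/');
```

The trade noted above still holds: this buys you expressive routes and a single entry point, at the cost of the drop-a-file-and-it's-live property.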


.. so you can live swap routes and endpoints even with dynamic routing (without performing any hot-reload dance in your own code), which actually sounds like a neat feature at first.


You conveniently leave out all the security mess of that design, especially WordPress. The plugin system is pretty much the cause of all the security issues in WordPress.

Perhaps end-users should not have the capacity to so easily add third party PHP code, even if it’s “simple.”


I really don't see how Wordpress is a valid argument here. We run several business systems in PHP serving hundreds of thousands of users. Last time I used Wordpress was over 10 years ago for my personal blog. Haven't used it for anything else. PHP is great for us, regardless of how Wordpress performs.


I was primarily responding to the OP that used WordPress as an example of how easily you can add plugins (third-party PHP code). I see that as an anti-pattern, because it encourages non-developers to add PHP code into the system, much of it poorly written and insecure (or not performant).

But that plugin system also is one of WordPress' greatest assets. And you can add PHP code to any part of a "theme" too. If you turned off the ability for themes and plugins to be "added live" then I don't think WordPress would be nearly as successful as it has become.


Wordpress is written in PHP.

Wordpress is not PHP.


Did I say that? Please don't twist my words. I was responding to the OP who was talking about the WordPress plugin system, which allows just about anyone to add third-party PHP code into the system. And those plugins have access to everything in the WordPress stack (and the file system for that user).

And I think you can definitely argue that a low barrier to entry means that it gives beginners a lot of power they don't quite understand.

Which is why I think most people steer towards common web frameworks, because not many people can know all the ways you can create security issues.

Then you get into the complexities of various common web frameworks (Zend, etc) and you have to really wonder if the original OPs argument about simplicity and ease-of-use are worth that trade-off from using something else.


Again, those frameworks are written in php. They are not php.

You're trying to apply the problems of a framework/app to a language itself.


> Can extend itself as it runs. For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

Since it's basically just doing eval(loadFile(...)), any language with eval() can do this, for better or worse. Which is like most of them. Perl, Python, Ruby, and Lua, to name the most common.
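
A hedged sketch of that load-and-eval pattern in Python (the names here are illustrative); this is essentially what including a freshly downloaded plugin file does in PHP:

```python
# "Plugin" loading via eval/exec: run downloaded source inside the
# host process. Powerful, and exactly as dangerous as it looks,
# because the plugin gets full access to everything the host exposes.

def load_plugin(source, host_api):
    """Execute plugin source with access to a host API dict."""
    namespace = {"host": host_api}
    exec(source, namespace)  # runs the plugin's top-level code
    return namespace

plugin_code = """
def greet():
    return "hello from plugin, " + host["site"]
"""

ns = load_plugin(plugin_code, {"site": "example.org"})
```

Swapping the file on disk and re-running `load_plugin` is the whole "extend itself while running" trick, no restart required.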


> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

> Intuitive, simple, powerful. Can be as easy as editing a php file in notepad and dropping it on a ftp server. Deployed.

This isn't really PHP specific; with CGI you can do that in any language, even compiled ones like C.
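
For example, a complete CGI endpoint in Python; dropped into a server's cgi-bin (assuming CGI is enabled), it is "deployed" with no router or build step:

```python
#!/usr/bin/env python3
# An entire CGI endpoint: the server runs this script per request,
# and stdout becomes the HTTP response (headers, blank line, body).
import os

def respond():
    path = os.environ.get("PATH_INFO", "/")
    body = "<h1>Hello from %s</h1>" % path
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(respond(), end="")
```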

> Amazon lambda may have more in common with PHP in terms of discreet deployable units of functionality. What's old is new again.

Agreed. I must admit I'm guilty of this myself; there is a tendency to start with a framework when there are much simpler and quicker ways to get started, something the world seems to have largely forgotten.


> Can extend itself as it runs. For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

Java: Use a classloader to load a JAR and instantiate one of its classes: https://stackoverflow.com/questions/60764/how-should-i-load-...

.Net - use an AppDomain (or other methods): https://stackoverflow.com/questions/1137781/correct-way-to-l...


> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

I mean that's just how CGI works, most every HTTP server still supports CGI, nothing stops you from deploying that way. Unless you're using Java or the like of course, then it's not very convenient.


You can absolutely do this in Java if you want to.

I mean, it's insane and you shouldn't, but it's eminently possible.


I'm not saying it's not possible; I'm saying it really isn't convenient.


But the reason why it isn't convenient is because the consensus amongst Java programmers is that it's a bad idea. It wouldn't be difficult to build tooling around if one wished to - I think in this case, the fact that almost every language has migrated away from CGI-style routing is probably telling.


> But the reason why it isn't convenient is because the consensus amongst Java programmers is that it's a bad idea. It wouldn't be difficult to build tooling around if one wished to

That's part of the inconvenience though, one of the advantages of CGI was the low tooling requirements.

> I think in this case, the fact that almost every language has migrated away from CGI-style routing is probably telling.

I mean sure, but beyond not being a very good protocol, CGI needs to execute the handler script on every connection, and Java is not known for that being cheap.


Wordpress can instantly extend itself and its extensions can instantly self-f*ck the whole thing up. (Sorry for the WP hate, I forgot to take my pills)


Deployment of Go services is much easier.

Try deploying any self-hosted service like Nextcloud or Gitea. It was a nightmare to deploy Nextcloud, and it also turned out to be too slow for ONE USER.


Nextcloud is just a bad product :/ it’s hard to pin the blame on PHP.


Still, it took me quite a long time to deploy PHP services. I didn't manage to get Seafile (Python) working at all. And it was relatively easy to deploy a pair of Go services.


Plain PHP files work that way. But what if you use a framework (even an in-house one)? Does this still hold true?


Generally, no. Almost any framework has build steps these days, at least `composer install` to get all the dependencies, and usually also special web server configuration, so you don't expose all your PHP files to the web, only the index.php entry point.


Yup :) You very much just `git clone` a Laravel project, point Apache/NGINX (with PHP-FPM configured) to the `public` folder and all you need to do is run database migrations (for the majority of basic deployments).


I inherited a Symfony2 app and it was a bitch to deploy. It definitely had an asset pipeline.

  php app/console cache:clear --env=prod --no-debug
  php app/console assetic:dump
  php app/console cache:warmup --env=prod --no-debug
  chown -R apache:apache . # fix owner
  chmod -R u=rwx,g=rwx,o=rx app/cache  # fix cache perms
  apachectl restart # bounce Apache, otherwise it can throw segfaults


That chown looks super suspect. Usually you wouldn't want the webserver to have write access to the web application it is executing; that's how you get backdoored.


This is my concern with things like the Wordpress auto updater, but it seems the trade off is not having to worry about manual patch management. Security vs convenience as always.


How I've been running multiple wordpress installs for years:

user: $sitename - the 'owner' of the whole hosted dir. rwX

user: $sitename-PHP - the user that php-fastcgi or whatever runs as, r-X permission on the dir, and write permission on the content uploads directory, but CANNOT write to any plugin install dirs, or upgrade php files.

user: nginx - can read all files except the wp-config.php file, which is limited to only the $sitename group reading it.

Then use wp-cli to do automatic upgrades every few hours, and a localhost-only FTP server for WordPress to do plugin installs with. When you try to install a plugin, it asks for an FTP username, host, and password. You put in the $sitename user, '127.0.0.1', and the $sitename password, and you're set. Those login details are never saved anywhere, so the admin has to put them in each time (or their browser stores them).

Works pretty well for me.
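
A rough sketch of that permission split, with illustrative paths and modes only (the real setup also needs the separate per-site users described above):

```shell
# Illustrative layout: the PHP worker may write only to uploads,
# never to code directories. Real deployments enforce this with the
# separate $sitename / $sitename-PHP users; modes shown here only.
set -e
site=./example-site
mkdir -p "$site/wp-content/uploads" "$site/wp-content/plugins"

chmod -R u=rwX,go=rX "$site"              # owner manages everything
chmod -R a-w "$site/wp-content/plugins"   # code dirs: read-only
chmod -R u=rwX,g=rwX "$site/wp-content/uploads"  # uploads: writable
```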


The chmod is pretty suspect as well.


It's really not... cache is the equivalent of /tmp.

Check the permissions there sometimes.


Unless it is mounted noexec why would you risk setting the exec bits on files that are writable by the webserver?


The exec bit is also important for accessing (sub)directories. Although for that you should use X instead of x…


Of course. I actually regard this as a fairly major issue with the Linux permissions model. Far too many people do as the GP and accidentally set all the files to executable. Cleaning up after someone has done this on a nested set of subdirectories is quite tedious.

NB: if anyone has ever run chmod -R u+x . (or equivalent) and you then run chmod -R u=rwX, the files will stay executable...

You can always use find to set all files to -x and directories to +x, but then what if some of the files needed to be executable?
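
The u=rwX trap is easy to demonstrate in a throwaway directory:

```shell
# Demo: chmod's capital X sets execute only on directories and on
# files that ALREADY have an execute bit. So once a file has been
# made +x by mistake, a later `u=rwX` keeps it executable.
set -e
mkdir -p demo-x
touch demo-x/plain.txt demo-x/script.sh
chmod u+x demo-x/script.sh   # simulate the earlier blanket-+x mistake

chmod -R u=rwX demo-x        # dirs get x, plain.txt stays non-exec,
                             # but script.sh KEEPS its exec bit
```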


This is sort of a how to not do things:

1. chown -R apache:apache . # Now the webserver will have write access over the whole application!

2. chmod -R u=rwx,g=rwx,o=rx app/cache # Now all files in cache are executable and writable by the webserver!

3. apachectl restart # graceful might be better here.

Securitywise 1 and 2 are not good things to do.


Hmmm Symfony uses routes declared in a file...


> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

Not if you're using... literally any modern framework and deploying it properly. This hasn't been the case in years except for common legacy stuff like Wordpress.


> For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

I'm quite sure you can achieve this in Go with plugins[1]. It's not as flexible as PHP, mainly because it involves a static type system and a compile phase (and more work in general).

1: https://golang.org/pkg/plugin/


Yeah, running plugins is not a difficult problem in most languages; it isn’t widely practiced outside of the PHP community because most folks don’t want to be the reason their users were hacked.


> What other web platforms can do this?

Kinda all Java web stuff is capable of doing this (e.g. installing plugins in Atlassian products).


I'm currently developing a facility to do live updating of processes with running game loops in Go. If one were to add in dynamically loadable libraries (working in Linux, last I checked) one could have the same facility for changing a single endpoint in Go. (It could also be implemented with small executables and reverse proxies.)

I think what this demonstrates is that Go isn't a better PHP. Rather, we're at the point where Go could be used to build a better PHP. The most likely paths for this, are to somehow come up with a big group of developers with a penchant for doing this, and/or have some big company fund this.


>Amazon lambda may have more in common with PHP in terms of discreet deployable units of functionality. What's old is new again.

This doesn't seem to be a language-choice problem; it seems to be a monolith problem, where PHP simply managed to avoid some of the traditional monolith issues by design. Your CD pipelines in any microservice architecture should solve these problems, whether it's using FaaS or containers, and whether it's using a compiled language or an interpreted one.


>> PHP simply managed to avoid some of the traditional monolith issues by design

No, it didn't. PHP to this day is the spaghetti mess it has always been.


I really don't see what makes PHP a spaghetti mess. Spaghetti code is a skill issue, not a language issue.


No, I am speaking about the ecosystem. Large, industry-grade projects developed in this language.

What standards did Wordpress set a few years ago? Huge, disjoint sets of files spread throughout the themes folders. How much time does it take me to understand which of these files is causing the error?

Do you still upload your code via FTP directly to the web server? Do you still reset PHP's code cache in order for that new code to become live? Well, I am going to be happy when your skill set is made redundant, because you are getting paid to know all of these quirks and they are not making the industry any better.

You can write good code in _anything_. You could even write it on paper and then pay a person working for $0.01/hour to execute it. The question is whether the liability of your methods is lower than the liability of other methods.

PHP is essentially the dinosaur from the times when SEO consultants made loads of money. They are still using PHP because they never knew anything better and the scale of the industry is enormous, so it's slow to change.


> Do you still upload your code via FTP directly to the web server?

It's pretty rare that this is even an option, since no one in their right mind would set up an FTP server with access to anything important.

SFTP? I don't see why not. No need to make things more complex than is necessary.

I don't remember any particular problems with code caching when I did PHP development, though maybe you used a different caching tool than I did. Also, WordPress is its own beast, and is generally reviled by most everyone, especially PHP programmers.


>> SFTP? I don't see why not. No need to make things more complex than is necessary.

You are trying to argue definitions (SFTP vs FTP, when FTP here just means "any file transfer protocol") instead of addressing the actual point: uploading any number of files, by any protocol, that are immediately picked up by the interpreter introduces nondeterministic behaviour, where users accessing your website see only a subset of the files you were trying to upload.

Doesn't that just reinforce one's confidence in the fact that PHP developers are generally quite amateur in nature?


Who says you have to upload your code directly to the deployment directory? Upload files, switch symlink. Done. You are talking about a deployment technique that has nothing to do with PHP.
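
A minimal sketch of that symlink-switch deploy (paths are illustrative):

```shell
# Atomic-ish deploy: upload into a fresh release directory, then
# flip a symlink. Requests always see a complete set of files,
# never a half-uploaded mix.
set -e
mkdir -p releases/v1 releases/v2
echo "old" > releases/v1/index.php
ln -sfn releases/v1 current          # site serves from ./current

echo "new" > releases/v2/index.php   # upload the new version fully...
ln -sfn releases/v2 current          # ...then switch in one step
```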


Nah, I am speaking about the general/majority culture of how it is done in PHP.

Do you not see the comments in this specific thread? The guy a bit higher up the chain thinks my issue is with "FTP vs SFTP", rather than "the version of the files at a particular moment in time".

He doesn't even understand why a partial view of the files could be an issue.

And the general sentiment in PHP is that "you can change any file separately, by editing in online on the server - and that is a good thing".


Being able to extend or replace modules easily (and especially online) is something that's challenging in monoliths. The design of PHP avoids this issue. I'm not really making any other statements about the quality of the language here.


The PHP workflow is definitely great for a lot of uses cases, but the language itself is horrible.


> Each PHP file is an endpoint. As opposed to having routers in code or client side SPA routing.

Except in reality, once you want URLs other than /whatever/file.php, you're back to implementing a router.

> PHP files can be deployed independently, swapped out or updated live. Intuitive, simple, powerful. Can be as easy as editing a php file in notepad and dropping it on a ftp server. Deployed.

Except in reality, you don't want to deploy directly after editing a file, since that's deploying straight to production; you're gonna have a bad time.

> A single layer as opposed to 'modern architecture' where there's client side back/front end layers, api layer, logic, validator, data access, and ORM layers.

Except in reality, that just leads to a bunch of spaghetti code with no separation of concerns that is impossible to maintain.

> Can extend itself as it runs. For example Wordpress, running off of php files can download plugins to its own server (which are just more php files) to instantly extend itself. Without restarting or redeployment. (What other web platforms can do this?)

Except in reality, this only works with PHP apps that have an integrated package manager, and even then it leads to problems when a version is incompatible, whether in PHP itself or in a plugin dependency.

> Amazon lambda may have more in common with PHP in terms of discreet deployable units of functionality. What's old is new again.

Except in reality, serverless lambda is a terrible idea:

1. lock in to a specific platform with limited visibility tools and a dependency on a 3rd party when shit doesn't work

2. when functions change you run into function incompatibility chaos and no smart way to handle that mess

3. the "scalability" win is a lie, it's just deferred to whatever data storage service you are using - which ends up blowing up because while serverless scales, the db doesn't

4. addendum to 3, the statelessness of lambda means you cannot use local caches to increase your throughput, increasing your hardware requirements

> Compiling/deploying an entire system to change a single endpoint feels backwards after using PHP.

Except in reality, you are notified of a bunch of potential bugs right then and there, instead of "maybe" finding out about them at runtime. This is especially true since PHP will by default treat an undefined variable as an empty string and display it as such on the page, and it's not until Joe User notices it that you'd ever find out.


> 1. lock in to a specific platform with limited visibility tools and a dependency on a 3rd party when shit doesn't work

The logging analysis tools are excellent and it’s quite easy to tie your lambda logging to whatever tool you are using.

As far as "lock-in", there is an Amazon-provided framework in every supported language to just add a proxy that allows you to use your standard framework (Node/Express, C#/ASP.NET, etc.), and you can deploy to Lambda or your traditional web server with no code changes.

> 2. when functions change you run into function incompatibility chaos and no smart way to handle that mess

What???

3. the "scalability" win is a lie, it's just deferred to whatever data storage service you are using - which ends up blowing up because while serverless scales, the db doesn't

Serverless Aurora (Mysql) and DynamoDB both have autoscaling. You can also auto scale Read Replicas fast with traditional Aurora.

> 4. addendum to 3, the statelessness of lambda means you cannot use local caches to increase your throughput, increasing your hardware requirements

Again, not true: if you are calling an endpoint frequently, the instance stays warm for up to around 15 minutes and can maintain state.


Regarding (4): Lambda and friends are not actually stateless between requests; the process is initialized once and then potentially reused many times over hours, so you can cache between requests, both in memory and on the filesystem (AWS Lambda provides 512MB in /tmp).



