Sol-Ark’s markup is something like 5x the list price just for the official rebadged version. Sol-Arks (a “US veteran owned company”) still have their firmware made in China, are still susceptible to Chinese hackers, and have to be bought through a distributor. So naturally people went with grey-market Deye inverters because of the shenanigans from Sol-Ark.
Now, people are without power and they have to go to Sol-Ark to get power restored, likely by paying through the nose.
That's one way to frame it. Another is that Sol-Ark incurs the costs of developing, marketing, and supporting their official devices, while the contract manufacturer is able to sell its own version in the Chinese market. Greedy people who don't want to pay Sol-Ark for all the costs they incurred bought grey-market devices that Sol-Ark has repeatedly warned are in contract violation in this market. The manufacturer, not Sol-Ark, has now bricked those devices, and people are blaming Sol-Ark anyway because they want to continue to justify their actions.
If people are buying directly from the manufacturer, why should any costs that Sol-Ark has incurred be their concern? They aren't using the official devices, so they aren't enjoying any advantages of that, either.
So, companies like the free market when it suits them, but want regional monopolies (without providing any value) when the free market benefits the consumer. Interesting.
It does make one wonder why these exclusivity agreements exist.
If Sol-Ark is adding value and competitive differentiation, wouldn't that justify the price premium over the basic Deye product? Especially if Deye is not willing to offer its own support/warranty to customers?
Why does Sol-Ark need to create a more monopolistic landscape? Not being judgemental, genuinely curious. (Well, I know why Sol-Ark wants it. I guess the question is why we allow it).
Because those costs were incurred with the plan to recoup the cost from sales in the US, and (presumably) those people are bypassing the licensed sale/use; which ruins that plan.
Your question is really no different than asking why it's not legal for me to photocopy books and ignore copyright.
The problem is they already took the money and basically broke it after the fact. Typically there are all sorts of legal protections against something like that.
Why should we as a society enable plans and business models that hinge on taking away consumer freedom to get the product from the most competitive supplier instead of the one who wants to milk an artificial monopoly?
It was my understanding that the company they bought it from didn't have the rights to sell it in the US. As such, there's no real difference between buying from them and buying from someone that stole it and sold it to you.
Now, you can argue that country-specific licenses shouldn't be allowed; but they currently are.
I think most people can see the obvious ethical difference between actually stealing something vs breaking an exploitative license like that, and react accordingly.
Unfortunately, it is an accurate and necessary term. While you might think that you are free to buy and resell anything you want without problem, the courts have made the issue much more grey than black and white. See the Omega v. Costco lawsuit for an example.
My experience with this class of Chinese-manufactured inverters is that they all use TI TMS320F28xxx series DSPs, usually without any protection fuses burnt. If you look hard enough, you should also be able to find unencrypted firmware and flash it with the standard TI tooling.
Depends on your definition of "free", but, yeah. For non-commercial, non-paid use, both VS and Rider are good options, but this submission still breaks site guidelines and is a dupe...
I would assume that as an individual (rather than a company), it's only after you actually start making money from the project that it becomes commercial
Azure/AWS provide many more base services (multiple regions/AZs, DynamoDB, S3, SQS, etc.) that cost pennies to operate, and they aren't really targeting the cheap low end that Hetzner is.
Blazor Server in an enterprise (intranet) is extremely fast because the network is extremely fast.
Across the Internet, it's still quite fast if you use Azure SignalR service, which is effectively a globally distributed CDN. Most commercial apps use this service.
Blazor WASM is better than I thought it would be. My company has built a healthcare SaaS app that has ~5k daily users on it and no one has complained about the initial rendering and download time to pull down 25MB on the initial load. This sounds like a lot but with modern broadband, 5G, and CDNs, this takes about a second or two.
I think that’s exactly the opposite of the impression I get. Private equity is going to push them quickly toward better profitability through price increases, layoffs, and reduced R&D.
I’d like to support this, truly I do; I’m a .NET fan.
But I read the docs. Sisk is supposed to be simple, yet the code samples are nearly the same as ASP.NET minimal APIs. Can you clarify why Sisk is better than out-of-the-box .NET?
.NET deserves a good, separate non-ASP.NET Core+Kestrel web server.
The reason is that the first-party solution has to be fast while also supporting a lot of different features and staying compatible with all the ways to process request input/output, parameter binding, rich telemetry (particularly OTLP) integration, and so on.
Which is why a lightweight, UTF-8-based, zero-copy pipeline (via span/memory) that tries to reduce call-stack depth and context switching, and that moves as much of the feature composition to compile time as possible, could be indispensable.
Such a server could be built either on top of raw `Socket` and `SslStream` (a toy attempt, really, can be thrown together in under an hour) and their async engine, or via a custom one - we have all the tools to build an io_uring network stack with async integration.
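To make "a toy attempt in under an hour" concrete, here is a rough sketch of that kind of raw-`Socket` server; it only speaks plaintext HTTP/1.1, skips TLS (`SslStream`), keep-alive, and real header parsing, and the port is arbitrary:

// Minimal raw-Socket HTTP/1.1 sketch: accept, read the request head, write a canned response.
using System.Net;
using System.Net.Sockets;
using System.Text;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Loopback, 8080));
listener.Listen(backlog: 512);

while (true)
{
    var client = await listener.AcceptAsync();
    _ = Task.Run(async () =>
    {
        using (client)
        {
            var buffer = new byte[4096];
            // Read whatever fits of the request head; ignores bodies and oversized requests.
            var read = await client.ReceiveAsync(buffer.AsMemory(), SocketFlags.None);
            if (read == 0) return;

            var response = Encoding.ASCII.GetBytes(
                "HTTP/1.1 200 OK\r\nContent-Length: 13\r\nConnection: close\r\n\r\nHello, world!");
            await client.SendAsync(response.AsMemory(), SocketFlags.None);
        }
    });
}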
.NET's compiler is way better than that of any other GC-based platform except OpenJDK/Graal, but unlike .NET, the JVM has few features to optimize this further and bridge the gap with C++- and Rust-based applications.
There is a lot of raw runtime performance left on the table, and an alternate implementation that gets back to the top of the TechEmpower charts would be a welcome change :)
Last time I looked, Kestrel already uses most of the techniques above (sans an io_uring backend for Socket). Almost all allocations are pooled, and zero-copy as well.
Header parsing is even done with System.Runtime.Intrinsics using SIMD where possible.
The higher level ASP.NET Core stack is also quite efficient and optimized.
BUT: as soon as you go above the basic middleware pipeline it tends to get bloated and slow. ASP.NET Core MVC is particularly bad.
System.Text.Json is also quite nice, and often allocation-free.
We basically just use the middleware pipeline and nothing else, and can get millions of requests per second on basic hardware.
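For reference, "just the middleware pipeline and nothing else" can look as small as the sketch below; this isn't our actual code, and it assumes a Web SDK project with implicit usings:

// Pipeline-only app: one pass-through middleware and one terminal delegate.
// No routing, MVC, Razor, or model binding anywhere in the request path.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    context.Response.Headers["X-Handled-By"] = "raw-pipeline"; // illustrative header
    await next(context);
});

app.Run(async context =>
{
    context.Response.ContentType = "text/plain";
    await context.Response.WriteAsync("hello");
});

app.Run();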
This is absolutely true; there has been a lot of performance work invested at all layers, and in features that benefit from improvements in the runtime itself.
As you noted, the problems happen later in the handling pipeline. There are also choices that ASP.NET Core has to make as an extremely general-purpose web framework.
System.Text.Json is pretty good for a default serializer, but it's far from the fastest JSON serialization that can be done in .NET, either.
Both of these end up reallocating and transcoding data, and STJ can also take a hit if the deserialized classes/structs have converters on them, as that can disable the fast path.
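As an aside, one way to keep STJ closer to its fast path is the source-generated context; a small sketch below (the type names are made up, and member-level converters can still opt you out of the generated fast path):

// Source-generated System.Text.Json context: serialization metadata and fast-path code
// are emitted at compile time instead of being built via reflection at runtime.
using System.Text.Json;
using System.Text.Json.Serialization;

public record Order(int Id, string Customer, decimal Total);

[JsonSourceGenerationOptions(GenerationMode = JsonSourceGenerationMode.Default)]
[JsonSerializable(typeof(Order))]
public partial class AppJsonContext : JsonSerializerContext { }

// Usage:
// var json = JsonSerializer.Serialize(new Order(1, "Contoso", 99.95m), AppJsonContext.Default.Order);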
My idea here is that a new implementation can benefit from hindsight about the strengths/weaknesses of ASP.NET Core and about how its design influences the implementation choices of the user (and whether those end up being optimal or not).
It would also not be constrained by backwards compatibility or certain organizational restrictions - for example, you would not need to use the out-of-box QuicConnection and QuicStream that rely on msquic; you could opt to statically link parts of the stack implemented in Rust, or bring more of the logic over to C# instead. There is a certain conventional way you are expected to approach this in dotnet/* repositories, and it might be a bit restrictive in achieving the end goals of such a design.
It would be able to approach the problem as a library that expects a more advanced user, closer to how e.g. Axum or back in the day actix-web did (and by advanced I mean effective usage of (ref) structs and generics, not that it would need boilerplate).
p.s.: does this organization with millions of RPS have a name?
This project might have helped me when I needed to implement a console app that might or might not start a web server.
ASP.NET is very overbearing (even using minimal APIs) when you want to use other Microsoft utilities like DI, logging, or config, since it wants to be the main entry point of the application.
I never found an easy way to use the generic host together with an optional web application where both shared the DI. Note that this is more a problem with the generic host than with ASP.NET itself.
It is actually possible to separate those things, but it's tricky.
Our current product can run in several modes, one with a web UI and API and one without. If running without, there is no trace of the ASP.NET Core pipeline (and Kestrel is also not running).
We're using ASP.NET Core minimal APIs for both the API and the UI (if configured to run in that mode).
If I understand the problem, just move all your DI registrations to a shared extension method:
public static class ConfigurationExtensions
{
    public static IServiceCollection AddMyApp(this IServiceCollection services)
    {
        services.AddScoped<IFooService, FooService>();
        return services;
    }
}
// In the console app:
var consoleBuilder = Host.CreateApplicationBuilder(args);
consoleBuilder.Services.AddMyApp();
...

// Pseudocode - in the real world you'd put this in another class or method called by the command-line code:
if (webHostCommandLineSwitch)
{
    var webBuilder = WebApplication.CreateBuilder(args);
    webBuilder.Services.AddMyApp();
    ...
}
It seems every major job-posting platform gets overrun with spam, and eventually candidates move to a new platform every ~5 years.
First it was CareerBuilder, then Monster, then Dice, and now Indeed is having its day in the sun too. I wonder if the emergence of GenAI in the past 2 years has accelerated the spamification of Indeed and thus its demise. I’ve posted there and within minutes get hundreds of AI-generated “candidates” with practically the same resume/CV.