Please see my reply to the "someone else" you mentioned. If there is anything else you think I might have employed incorrectly during the benchmarks, I am all ears.
We store the real-time content stream in a separate bulk storage unit (e.g., BigQuery) with a certain retention window, but the ETL'ed documents are always on ES. Given that a plain event (i.e., not an ETL'ed document) is of little value for search, I would not call the stream storage the primary storage. It just helps us rebuild the ETL state in case of an emergency.
It was indeed an incorrect snippet -- removed it. We do have a group of PostgreSQL experts within the company, and we let them tune the database for the benchmark. Let me remind you, this was not a tune-once, run-once operation. We spent close to a month making sure that we were not missing anything obvious for each storage engine. But as I explained, nothing much worked in the case of PostgreSQL.
Either way, something still seems off. If MongoDB and Elasticsearch were 3-5x more performant, I would still find that surprising. A >10-20x difference really seems like a configuration or implementation issue.
The "refresh=wait_for" [1] index setting does guarantee that a subsequent read will get the data. It causes all shards to refresh:
"Refresh the relevant primary and replica shards (not the whole index) immediately after the operation occurs, so that the updated document appears in search results immediately."
There's also the "wait_for_active_shards=<n>" parameter, which merely waits until n shard copies have acknowledged the write.
It guarantees that all replicas will report a state in sync with each other on search, not that the last reported state is the actual current state of the index.
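To make the contrast concrete, here is a minimal sketch using the official Python client (the cluster URL, index name, and documents are made up for illustration):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # made-up local cluster

    # refresh="wait_for" blocks until the change is visible to search,
    # so the follow-up query is guaranteed to see the document.
    es.index(index="products", id="1",
             document={"title": "blue bike"},
             refresh="wait_for")

    # wait_for_active_shards only waits until n shard copies have
    # acknowledged the write; it says nothing about search visibility.
    es.index(index="products", id="2",
             document={"title": "red bike"},
             wait_for_active_shards=2)

    print(es.search(index="products",
                    query={"match": {"title": "blue"}})["hits"]["total"])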
As the numerous comments here and the documentation state, the refresh flag on your insert/update will ensure that the changed data in that request is visible to queries afterwards.
Where did you get the behavior you described? Are you sure you're not confusing this with the separate refresh command? That command is not attached to any particular insert/update.
Direct from ES documentation on the `refresh` flag:
"Refresh the relevant primary and replica shards (not the whole index) immediately after the operation occurs, so that the updated document appears in search results immediately."
Do you have other information about the refresh flag? Their documentation clearly states that forcing a refresh is applied to the primary and replica shards, meaning the document will be available for query directly after the call returns.
My experience was that this was not reliable. It was nearly always true, but not always, and tests would sporadically fail some small percentage of the time.
However, this was back in ~2015 and Elasticsearch 1.3 or something like this, which is of course a now-ancient version. Perhaps things are different now.
edit: Perhaps we were using the refresh command and not the refresh flag. It was a few years ago and I don't have access to the code any more, and my memory may be failing here. If the refresh flag works as advertised (enforces an index update and guarantees a consistent view of the data for the next query, which the command did not seem to) then that of course solves my initial problem W.R.T. writing tests.
Yes it does. I'm not sure what you think the refresh operation is if not that.
I've run countless integration tests with ES and never seen something fail due to refresh not working as advertised. If you have, what version of ES was it? Can you give some sample code that sporadically exhibits the problem?
We generally use an index refresh in ITs (running ES in Docker) and it fails occasionally, which I believe is the case described here: "The (near) real-time capabilities depend on the index engine used." https://www.elastic.co/guide/en/elasticsearch/reference/curr...
That seems to be the issue then. The refresh flag should be passed in your insert/update/delete operations.
The refresh command can also be called (which is what you're doing) but this is a different operation and just triggers the index build with no guarantees that it finishes or is consistent with any particular data mutation.
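For what it's worth, a sketch of the two patterns being contrasted here, assuming the official Python client and a made-up index name:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # made-up local cluster

    # Pattern 1: plain write followed by the standalone refresh command.
    # The refresh is a separate operation, not tied to this mutation.
    es.index(index="products", id="1", document={"title": "green bike"})
    es.indices.refresh(index="products")

    # Pattern 2: the refresh flag on the write itself, which ties
    # search visibility to this specific mutation.
    es.index(index="products", id="2",
             document={"title": "yellow bike"},
             refresh=True)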
Did you read the previously posted documentation for the refresh flag?
Hey Debarsh! First, thanks for taking time to read such a lengthy post.
If I am not mistaken, the majority of the PL/SQL glue is owned by Gert, though you might recall better. Quite a bit of VCS history was lost while migrating from SVN to Git. ;-)
The reason we are "replicating" the entire data set is to 1) determine the affected products and 2) re-execute the relevant configurations (facets, synonyms, etc.) when making retroactive changes. (For instance, say someone has changed the PL/SQL of the "leeftijd" (age) facet.) Here, the storage is required to allow querying on every field, for (1), and on id, for (2). While id-based bulk querying is supported by (almost) every ETL source, querying on every field is not. Hence, we "replicate" the sources on our side to satisfy these needs. Actually, the entire point of the post was to explain this problem, but apparently it was not clear enough.
Your remarks on event sourcing and BI leave me a little puzzled; I would need some elaboration on them. We do have event sourcing on our side (that is how we can replay in case of need), and BI is not really interested in ETL data. Maybe I misunderstood you?
I am also confused by how you relate scheduling/running PL/SQL jobs via Hadoop, Spark, Flink, etc. Did you see the link to Redwood Explorer I shared in the post?
Bol has plenty of other ETL pipelines for BI. What I meant is that the data cooked for search is not (much) of interest to BI, yet. Though we do have other means of feeding BI with search-relevant content.
In all fairness, you are right about the Oracle stuff ingrained in bol.com. I am not sure if I should go into detail, but the whole thing used to work like this: maintain event states with a "versions" table, then run Hadoop/Spark jobs on them, and snapshot the latest computed state to Oracle so that they could run BI on it.
But I understand now what you actually mean. I wouldn't call it ETL, as ETL is more about prepping the data for BI and not cooking data for search.
Yeah, I remember they used to have Redwood for scheduling PL/SQL queries, but I think the majority of the ETL jobs for BI were in Hadoop/Spark/Flink.
Having said all this, I think it is some neat and cool engineering work. I hope you guys are successful in implementing the solution.
Isn't ETL an intermediary step in BI? I think I am a bit confused. To give some context, this is my understanding: you have all the services generating data, and you have ETL jobs that extract data from these services, then transform and move it into a star or snowflake schema in an RDBMS, prepared for BI tools to query efficiently.
That is indeed the case, and we have been bitten by it too. The most effective workaround we managed to find is to flush periodically (say, every 300ms) for a certain timeout period before reading from ES again for checks. Though even then, ITs still fail from time to time.
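For the curious, a rough sketch of that kind of polling helper (Python, with made-up names and timings; I've used the refresh API here, but the idea is the same):

    import time
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # made-up test cluster

    def wait_until_visible(index, query, expected_hits,
                           interval=0.3, timeout=10.0):
        """Poll ES until the query matches the expected number of hits,
        forcing a refresh on every attempt, or give up after `timeout`."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            es.indices.refresh(index=index)
            total = es.search(index=index,
                              query=query)["hits"]["total"]["value"]
            if total == expected_hits:
                return True
            time.sleep(interval)
        return False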