I've had luck with telling it to rewrite just a certain part of the code. You can copy and paste the relevant part and ask it to rewrite only that, or you can direct it using natural language — you just have to be specific. If the chat is getting too long, you can also copy just the important parts, or the code so far, into a new chat and start from there.
I've found that copying the result of one session into the next can work pretty well (like when you hit token limits), especially if you have the bot include comments.
Yes, exactly. My company has asked AWS whether they will be adding pgvector support to RDS, but they haven't been able to confirm whether that will happen any time soon.
If the vectors are in the same database as the tabular/structured data, then text-to-SQL applications of LLMs become much more powerful. The generative models can then form complex queries that combine similarity search with aggregation, filtering, and joins across datasets. Doing this today with a separate, dedicated vector DB is quite painful.
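To make the point concrete, here's a rough sketch of the kind of query a text-to-SQL model could emit if embeddings lived in the same Postgres database via pgvector. The table and column names (`products`, `reviews`, `embedding`) and the distance threshold are made up for illustration; only the `<->` distance operator is real pgvector syntax.

```python
# Hypothetical pgvector query mixing similarity search with a join and
# aggregation -- schema names here are invented for the example.
query_vector = "[0.12, -0.07, 0.33]"  # placeholder embedding literal

sql = f"""
SELECT p.category,
       COUNT(*)      AS n_matches,
       AVG(r.rating) AS avg_rating
FROM products p
JOIN reviews r ON r.product_id = p.id
WHERE p.embedding <-> '{query_vector}' < 0.5  -- pgvector distance operator
GROUP BY p.category
ORDER BY n_matches DESC;
"""
print(sql)
```

With a standalone vector DB, the similarity lookup and the join/aggregation would have to happen in two systems and be stitched together in application code.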
You could write an FDW that reads/writes to a vector database using Postgres-ID-tagged vectors. You can write to it from Postgres, reference it in queries, join on it, etc. That cuts out a lot of the pain of having separate databases; the only remaining issues are additional maintenance overhead and hidden performance cliffs.
The hack to solve this is to embed each paragraph in your large corpus, find the paragraphs most similar to the user query using those embeddings, put the paragraphs and the raw user query into a prompt template, and send the final generated prompt to GPT-3.
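The steps above can be sketched end-to-end in a few lines. This is a toy, self-contained version: `embed` is a stand-in for a real embedding model (in practice you'd call an embeddings API), and the corpus, query, and prompt template are all made up.

```python
import math

def embed(text):
    # Stand-in for a real embedding model; hashes character counts
    # into a tiny normalized vector just so the example runs.
    vec = [0.0] * 8
    for ch in text.lower():
        vec[ord(ch) % 8] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

corpus = [
    "Postgres supports full-text search out of the box.",
    "pgvector adds a vector column type to Postgres.",
    "Redshift is a columnar data warehouse.",
]

query = "How do I store vectors in Postgres?"
query_vec = embed(query)

# Rank paragraphs by similarity to the user query and keep the top few.
ranked = sorted(corpus, key=lambda p: cosine(embed(p), query_vec), reverse=True)
top_k = ranked[:2]

# Fill the prompt template with retrieved context plus the raw query,
# then this string would be sent to the completion model.
prompt = "Context:\n" + "\n".join(top_k) + f"\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

The only real decisions in practice are the chunking granularity (paragraphs here), how many chunks to retrieve, and how to fit them into the model's context window.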
A "real" demo video was exactly what I was looking for as well. Since the concept is so novel, seeing the product in action would significantly increase my trust and willingness to buy. My skeptical side made me wonder if demos with real children had been left out on purpose so they wouldn't make the product look bad — probably not the impression you were going for.
It's useful as a study guide and reference even for someone who ostensibly learned all this stuff in school. It's a tremendously good book, and it's even more impressive that it's free to read online in a high-quality HTML document.
Create features for day of week, day of year, month of year, lagged values of y, and lagged values of y for each seasonal period (e.g. 1, 2, and 3 weeks ago, a year ago, etc.). You then predict forward one time step at a time.
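A minimal sketch of that feature setup and the one-step-ahead loop, using only the standard library. The lag offsets and the stand-in "model" (an average of available lags) are illustrative; in practice you'd fit a regressor on rows of these features.

```python
from datetime import date, timedelta

def make_features(history, today):
    """Build one feature row from calendar fields and lagged values of y.

    `history` maps date -> observed (or previously predicted) y.
    Lag offsets here (1 day, 1-3 weeks, 1 year) are example choices.
    """
    return {
        "day_of_week": today.weekday(),
        "day_of_year": today.timetuple().tm_yday,
        "month": today.month,
        "lag_1d": history.get(today - timedelta(days=1)),
        "lag_1w": history.get(today - timedelta(weeks=1)),
        "lag_2w": history.get(today - timedelta(weeks=2)),
        "lag_3w": history.get(today - timedelta(weeks=3)),
        "lag_1y": history.get(today - timedelta(days=365)),
    }

def predict(feats):
    # Toy model: mean of whichever lags exist. A real model would be
    # trained on historical feature rows instead.
    lags = [v for k, v in feats.items() if k.startswith("lag_") and v is not None]
    return sum(lags) / len(lags) if lags else 0.0

# Seed with two years of synthetic daily data, then roll forward one
# step at a time, feeding each prediction back in as a future lag.
history = {date(2023, 1, 1) + timedelta(days=i): float(i % 7) for i in range(730)}
day = max(history) + timedelta(days=1)
for _ in range(7):
    history[day] = predict(make_features(history, day))
    day += timedelta(days=1)
```

The key detail is the recursive loop: each predicted value is appended to `history` so the next step's lag features can use it.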
Yeah. The clickbait title suggests that big data tools themselves are going away; the actual contents are that the needless hype for their companies, folded into their sales pitches, is going away.
This looks really cool! I really like the free form chart builder. I work at Simply Wall St and we also make financial data easy to use and understand. We’ve realised that a bigger portion of our effort should be spent on educating individual investors so it looks like we’ve come to the same conclusion as both of you.
If anyone is keen to checkout Simply Wall St (https://www.simplywall.st), we have a 7-day free trial and a free plan for ongoing light usage.
I was initially excited about this, as I thought it might solve our Redshift pain points and potentially spare us a migration to Snowflake. But then I remembered that, at a previous company a few years ago, AWS account managers promised Athena and Spectrum would solve these same problems. I'm assuming the developer experience will still be terrible, with lots of knobs to tune to get any decent cost/performance.
I hold a very dim view of all information AWS emits, by whatever channels.
My experience with support/account managers is that they always tell you "yes, Redshift can do this", and the only way to actually get a "no" out of them is to already know Redshift cannot do something, and to explain to them why.
They won't deny reality, but you would never have got that answer from them in any other way.
I suspect the problem is the training AWS gives its staff. The material they are taught is relentlessly positive, and I suspect AWS staff actually have no idea what Redshift is no good for.
(Indeed, if you read the official docs for RS, which I strongly advise you never to do, you will come out the other end under the impression there is literally nothing Redshift cannot do; the docs describe everything using positive terms only.)
It would be awesome if GPT-4 could be made to edit the code in place, so it didn't have to regenerate it from scratch every time.
The regeneration also chews up a lot of the token limit, so it forgets crucial parts earlier on in the conversation.