You know what, I’ve been thinking of selling some shares of my space bag (which includes the common names here) to take some profit after a couple-fold increase. I feel so tempted to sell, not to mention in your case.
Not a 1% return though. An $8 premium on a $23.50 cost basis is a 34% return immediately, not accounting for the 32% gain should the shares get called away. You very much reduce your downside with covered calls on big movers because of the inflated IV. I have a few positions where my effective cost basis is $0 from selling premium.
Shhh don't engage a troll that's missing out on gains and picking a fight with everyone loving RKLB.
Instead, buy more calls so when you have money, you can fuck his girlfriend for the price of half a RKLB stock.
Torvesta probably, good for him. OSRS content creation has to be stressful knowing that your income depends on a (niche) game not dying. It’s still going strong, but in their shoes I’d be terrified about my situation in 10+ years.
Hey I have a data science background.
This is good and entertaining fluff, but if you know what you're looking at, there are important things missing and some things that look good but don't make sense.
I give him 4.8/5 for making it look important.
For example: He has two time series (the S&P 500 and the bonds) and he compares them at many possible offsets. This is called cross-correlation analysis. It's a real thing, but it's also notorious for overfitting the data and showing spurious relationships if you misuse it like OP does here. When you test many different offsets, you increase the probability of finding a high correlation *somewhere* purely by random chance. This is kind of like flipping a coin and getting heads 10 times in a row; it's impressive if you only flipped the coin 10 times, but much less exciting if you flipped it 10 million times. You were bound to get a 10-head streak at some point.

An overfit predictor is one that performs very well on the historical data used to find it, but poorly on new, unseen data. If you select the single best lag based purely on the highest R-value from your historical test (precisely what OP did here), you risk overfitting to random noise that exists in the sample but isn't truly predictive. That's almost surely what happened here, and the validation should have focused on showing that the model isn't overfit.
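A rough sketch of what I mean, using toy random walks instead of OP's actual data (the numbers are made up, but the effect is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two completely unrelated random walks standing in for the two series
# (toy data, not OP's actual numbers).
a = rng.normal(size=2000).cumsum()
b = rng.normal(size=2000).cumsum()

best_r, best_lag = 0.0, 0
for lag in range(1, 365):                      # try many possible offsets
    r = np.corrcoef(a[:-lag], b[lag:])[0, 1]   # correlation at this offset
    if abs(r) > abs(best_r):
        best_r, best_lag = r, lag

print(f"'best' lag: {best_lag} days, r = {best_r:.2f}")
# Because the walks trend and we searched hundreds of offsets, you'll
# usually land on a large-looking |r| at *some* lag purely by chance.
```

Two series that have literally nothing to do with each other will still hand you a flattering "best" lag if you go shopping for one.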
To validate a model like that you wouldn't just back-test it on the same data (which is what OP does). Some things you could do: split the data into in-sample and out-of-sample (e.g. build the model on only the first X days in the series, then judge it on its ability to predict the data after day X). You should/could also take steps to remove seasonality or trends within the time series first (we already damn well know the stock market is seasonal and trends, so using untransformed values is most definitely inflating his calculated correlation). It would also be good to do bootstrapping to check statistical significance instead of leaning on a single p-value.
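If anyone wants to see what that looks like, here's a minimal sketch on toy data, with a shuffle/permutation check standing in for a full bootstrap:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the two series (not OP's data).
spx = rng.normal(size=2000).cumsum()
bonds = rng.normal(size=2000).cumsum()

# 1) Difference the series first to strip trend/drift from the raw levels.
d_spx, d_bonds = np.diff(spx), np.diff(bonds)

def corr_at(lag, x, y):
    """Correlation between x and y shifted forward by `lag` steps."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# 2) In-sample / out-of-sample split: pick the lag on the first chunk only...
split = int(len(d_spx) * 0.7)
lags = range(1, 120)
best_lag = max(lags, key=lambda L: abs(corr_at(L, d_spx[:split], d_bonds[:split])))
r_in = corr_at(best_lag, d_spx[:split], d_bonds[:split])
# ...then judge that same lag on the untouched remainder.
r_out = corr_at(best_lag, d_spx[split:], d_bonds[split:])

# 3) Permutation check: how often does shuffled data produce an equally
#    impressive "best" correlation across all the lags we searched?
null_best = []
for _ in range(500):
    shuffled = rng.permutation(d_bonds[:split])
    null_best.append(max(abs(corr_at(L, d_spx[:split], shuffled)) for L in lags))
p_perm = np.mean(np.array(null_best) >= abs(r_in))

print(f"lag={best_lag}  in-sample r={r_in:.2f}  "
      f"out-of-sample r={r_out:.2f}  permutation p~{p_perm:.3f}")
```

If the relationship were real, you'd expect the out-of-sample r to hold up roughly as well as the in-sample one; with noise, it collapses.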
But it is very entertaining. OP probably also has a data background, to know exactly how to torture the data this way.
> Besides, wtf do a bunch of children know about market sentiment? It ain't stock brokers and investors playing RuneScape. It's like seeing patterns in clouds and concluding they were formed with intent. A bunch of nonsense.
It's not an impossible correlation - one could argue that when consumers have more disposable income they're willing to drop more money on online games. It's just that you'd likely find better correlations elsewhere.