Meta is doubling down on its commitment to the Metaverse, voicing its intention to allocate 20% of its expenses to it in a move that is almost certain to hit earnings hard in 2023.
Meta’s CTO Andrew (Boz) Bosworth reiterated Meta’s commitment to the Metaverse in a blog post where he stated that the company was comfortable with this level of spending for the time being.
I have recently estimated that Meta’s expenses in 2023 will be around $97.3bn, meaning that spending on the Metaverse is expected to be $19.2bn.
This is a vast amount of money to spend on a technology that might take off at the end of the decade, leading me to think that this is also an insurance policy.
Apple’s advertising policies have incensed Meta and cemented its resolve not to be dependent on the platforms of others, which is why it is determined to be a vertical player in the Metaverse. This may work out well in the next decade or so, but in 2023, this is going to have painful consequences.
If 2023 revenues are flat at $116bn (they might decline), EBIT will be $18.7bn, which, after tax, translates into $15bn of net income, or $5.58 per share. This puts the company on a 20.4x 2023 PER, which is very expensive for a company whose earnings have fallen precipitously with no recovery in sight.
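The arithmetic behind these figures can be checked with a quick back-of-envelope sketch. Note that the share count and share price below are not stated in the text; they are simply what the $5.58 EPS and the 20.4x multiple imply:

```python
# Back-of-envelope check of the 2023 valuation figures quoted above.
# The share count and price are implied values, not stated in the text.
ebit = 18.7e9        # estimated 2023 EBIT
net_income = 15.0e9  # estimated 2023 net income
eps = 5.58           # estimated 2023 earnings per share
per = 20.4           # quoted price/earnings ratio

shares = net_income / eps          # implied share count, ~2.69bn
price = per * eps                  # implied share price, ~$114
tax_rate = 1 - net_income / ebit   # implied effective tax rate, ~20%

print(f"implied shares: {shares / 1e9:.2f}bn")
print(f"implied price: ${price:.1f}")
print(f"implied tax rate: {tax_rate:.1%}")
```

On these implied numbers, the $70 entry point suggested below would correspond to roughly 12.5x the same $5.58 of 2023 earnings.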
Furthermore, governance remains poor, with Mr Zuckerberg alone in the driving seat, meaning that a heavy discount to fair value is warranted when looking for an entry point into this stock. Hence, I don’t think the worst is over for this company, and given its short- to medium-term outlook, $70 per share is where I would begin to get interested.
There are far better options to look at where there is growth to be had at a much lower multiple.
Artificial Intelligence – DALL-E
After having some fun with ChatGPT, I turned my attention to DALL-E to test its ability to draw outside of the box, where it utterly failed to perform. My daily newsletter is accompanied by a topical image that is often difficult to match to the theme of the topic under discussion.
Consequently, DALL-E, as advertised, is a perfect use case, as it promises a picture that fits each day’s theme with very little effort. However, reality brought this hope crashing to earth: in my experience, DALL-E is useless at creating anything that it has not explicitly been taught.
For a recent newsletter, I wanted a picture of a Tesla racing down the slope of the share price but what DALL-E produced was laughable. For another, I wanted a picture of a VW Beetle dancing backwards, but again, I received gibberish.
I tried again and asked for a robot playing Risk but got human and robotic hands playing chequers.
I did manage to get Bitcoins in a freezer, but nothing as good as I can find online with no effort.
I also asked for a Google robot driving a Renault car through the streets of Paris and received a very early Waymo vehicle with an unintelligible logo roughly superimposed on a blurred Paris street.
Consequently, I can only conclude that DALL-E is useless for any practical purpose and, when put to the test, massively underperforms the high expectations that have been set for it. I suspect that it has been trained to produce weird and whacky abstract art, and when it is asked for something that combines real-world subjects in unexpected situations (as above), it breaks.
This is yet another example of how these AI models remain unable to deal with generalisation and, at their heart, are nothing more than pattern-recognition and matching systems. I continue to think that the real hope for progress lies in combining rules-based (symbolic) software with deep learning, a field referred to as neurosymbolic AI.
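The failure mode described above can be sketched in a toy form. This is entirely illustrative and not any real system: a purely statistical component silently drops concepts it has never seen ("Risk" quietly becomes something chess-like), whereas a symbolic rule layer can at least detect that a requested concept was not handled:

```python
# Toy illustration of the neurosymbolic idea: a learned component
# proposes, and an explicit symbolic rule constrains the result.
SEEN_IN_TRAINING = {"robot", "playing", "chess", "hands"}

def neural_guess(prompt):
    # Stand-in for a pattern matcher: it keeps only concepts it has
    # seen before and silently drops the rest, which is roughly how
    # "a robot playing Risk" collapses into hands playing chequers.
    return [w for w in prompt.lower().split() if w in SEEN_IN_TRAINING]

def symbolic_check(prompt, recognised):
    # Explicit rule: flag any requested concept that was dropped,
    # instead of producing plausible-looking gibberish.
    missing = [w for w in prompt.lower().split() if w not in recognised]
    return len(missing) == 0, missing

prompt = "robot playing risk"
ok, missing = symbolic_check(prompt, neural_guess(prompt))
print(ok, missing)  # the rule layer flags "risk" as unhandled
```

The point is not that a word filter is AI, but that an explicit rule knows when it has failed, while a pure pattern matcher does not.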
Meta Platforms made some recent progress in this field, and this is where I am looking for a solution to some of the trickier problems, such as autonomous driving.