The WWDC (Apple Worldwide Developers Conference) keynote began in the early hours of June 6th, which was also my fifth day down with COVID-19. Would the long-rumored headset be postponed again?
So when Tim Cook appeared at two o'clock in the morning and announced "one more thing", my friends and I, on our side of the screen, cheered together:
Macintosh introduced personal computing, iPhone introduced portable computing, and Apple Vision Pro is going to introduce spatial computing.
As an enthusiast for cutting-edge tech, I cheered for the new toy I could own the next year; but as a Web 3 investor focused on games, the metaverse, and AI, I saw in it the sign of a new era, one that made me shiver.
You may wonder: "What does an MR hardware upgrade have to do with Web 3?" Let us start with Mint Ventures' thesis on the metaverse sector.
Our thesis on the metaverse and the Web 3 world
The asset premium in the blockchain world comes from:
A trusted transaction layer lowers transaction costs. Ownership of physical goods is confirmed and protected by the coercive power of the state; ownership of virtual-world assets rests instead on the consensus that the data cannot (or should not) be tampered with, and on the market's recognition of the asset once ownership is established. Anyone can right-click and save the image, yet a BAYC NFT still trades for the price of a house in a small lower-tier city. This is not because the copied image truly differs from the image in the NFT's metadata, but because the market holds a consensus on "non-copyability", and on that premise securitization becomes possible.
A high degree of securitization of assets brings a liquidity premium
Permissionless transactions, enabled by decentralized consensus mechanisms, carry a "permissionless premium"
Goods in the virtual world are easier to securitize than physical goods:
The history of digital payments shows that the habit of paying for virtual content did not form overnight, yet it is undeniable that paying for virtual assets has worked its way into everyday life. In April 2003, the iTunes Store showed people that on an Internet rife with piracy there was an alternative to downloading songs onto a Walkman: buying genuine digital music to support one's favorite creators. In 2008, the App Store arrived, the one-time-purchase model took off worldwide, and the in-app purchase feature that followed has contributed to Apple's digital revenue ever since.
The game industry's changing payment models trace a similar, quietly foreshadowed line. Games began as arcade games: in the arcade era players paid per experience (much as with movies); in the console era they paid for cartridges and discs (as with movies and music albums); then came purely digital game sales, alongside Steam's digital marketplace and the in-game purchases that let some titles post legendary revenues. The history of game payment models is also a history of falling distribution costs: from arcade machines, to consoles, to digital distribution platforms anyone can reach from a PC or phone, and finally to the games players are already immersed in. The broad trend is that the technical cost of distribution keeps dropping and the audience keeps widening, while in-game assets have shifted from "part of the experience" to "purchasable commodities". (Admittedly, the smaller trend of the past decade has been digital asset distribution costs creeping back up, mainly due to slowing Internet growth, fierce competition, and the monopolization of attention by traffic gatekeepers.)
So, what's next? Tradable virtual world assets will be a theme we're always bullish on.
As the virtual-world experience improves, people will spend ever longer immersed in it, and attention will shift accordingly. That shift in attention will in turn move valuation premiums away from assets strongly attached to physical entities and toward virtual assets. The release of Apple Vision Pro will thoroughly change how humans interact with the virtual world, lengthening immersion time and substantially upgrading the immersive experience.
Source: @FEhrsam
Note: this is our adapted definition of the pricing strategy. Under premium pricing, a brand sets its price in a range far above cost and fills the gap between price and cost with brand story and experience. Cost-based pricing, competitive pricing, supply and demand, and other factors also enter into how goods are priced; only premium pricing is expanded on here.
The MR industry: history and present
The exploration of XR (Extended Reality, including VR and AR) in modern society began more than ten years ago:
Magic Leap was founded in 2010. In 2015 its astonishing ad of a whale breaching in a school gymnasium electrified the entire tech world, but when the product officially launched in 2018 it was jeered for an extremely poor experience. In 2021 the company raised $500 million at a post-money valuation of $2.5 billion, roughly 30 percent less than the $3.5 billion it had raised in total. In January 2022, the Saudi sovereign wealth fund was reported to have taken majority control through a $450 million equity-and-debt deal, putting the company's actual valuation below $1 billion.
In 2010, Microsoft began developing HoloLens, releasing its first AR device in 2016 and a second in 2019 at a price of $3,000; the actual experience, however, fell short.
In 2011, the Google Glass prototype appeared, and the first product launched in 2013. It was briefly a sensation and carried high expectations, but camera privacy concerns and a poor product experience led to a dismal end, with total sales of only a few million units. An enterprise edition followed in 2019, and a new beta was field-tested in 2022 to a lukewarm response. In 2014, Google's Cardboard VR platform and SDK came out, and in 2016 Daydream VR followed, currently the most widely used VR platform on Android.
In 2011, Sony PlayStation began developing its VR platform, and PSVR debuted in 2016. Trust in the PlayStation brand made initial sales enthusiastic, but the follow-up response was poor.
In 2012, Oculus was founded; Facebook acquired it in 2014. Oculus Rift launched in 2016, and four models have since shipped, emphasizing portability and lower pricing. It holds a relatively high share of the market.
In 2014, Snap acquired Vergence Labs, a company founded in 2011 to focus on AR glasses, which became the prototype of Snap Spectacles. First released in 2016, Spectacles went through three further devices. Like most of the products above, they drew crowds at first, with people lining up outside stores, but few users stayed; Snap shut down its hardware division in 2022 and refocused on smartphone-based AR.
Around 2017, Amazon began developing Alexa-based AR glasses; the first Echo Frames were released in 2019 and a second version in 2021.
Looking back at this history, the XR industry has taken far longer to expand and cultivate than anyone in the market expected, whether deep-pocketed tech giants with armies of scientists or smart, capable startups that raised hundreds of millions to focus on XR. Since the consumer VR product Oculus Rift launched in 2016, all VR brands combined (Samsung's Gear, ByteDance's Pico, Valve's Index, Sony's PlayStation VR, HTC's Vive, and so on) have shipped fewer than 45 million units. Gaming remains the dominant use of VR devices, and before Vision Pro no AR device had appeared that people were willing to use even occasionally. From SteamVR data we can roughly infer that VR devices may have only a few million monthly active users.
Why have XR devices failed to catch on? The failures of countless startups and the retrospectives of investment institutions offer some answers:
1. The hardware is not ready
Visually, VR devices span a wider field of view and sit closer to the eyes, so even on the most advanced devices the pixels are hard to ignore; full immersion calls for 4K per eye, or 8K across both. Refresh rate is another core element of the visual experience: the market consensus is that to prevent dizziness, XR devices need 120 Hz or even 240 Hz to sustain an experience resembling the real world. And at a given level of computing power, refresh rate must be balanced against rendering quality: Fortnite supports 4K resolution at 60 Hz, but only 1440p at 120 Hz.
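To make that trade-off concrete, here is a quick back-of-the-envelope calculation (our own illustration; the resolutions and refresh rates are the ones cited above): at a fixed pixel-fill budget, resolution and refresh rate trade off against each other almost one for one.

```python
# Raw pixel throughput the GPU must fill each second.
def pixels_per_second(width: int, height: int, hz: int) -> int:
    return width * height * hz

fortnite_4k_60 = pixels_per_second(3840, 2160, 60)       # 4K at 60 Hz
fortnite_1440p_120 = pixels_per_second(2560, 1440, 120)  # 1440p at 120 Hz

print(f"{fortnite_4k_60:,}")      # 497,664,000 pixels/s
print(f"{fortnite_1440p_120:,}")  # 442,368,000 pixels/s
# The two budgets are within ~12% of each other, which is why doubling
# the refresh rate forces the resolution down to roughly half the pixels.
```

The same arithmetic explains why "8K at 120 Hz" is such a steep ask: it multiplies both factors at once.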
Hearing, by contrast, seems less immediately valuable than vision, so most VR devices have not sweated this detail. But imagine a space where every voice, whether the speaker stands to your left or your right, arrives flatly from overhead: immersion drops sharply. Or picture a digital avatar fixed in your AR living room that speaks at the same volume whether you are beside it or walking in from the bedroom: the realism of the space quietly erodes.
In terms of interaction, traditional VR devices ship with handheld controllers, and some, such as the HTC Vive, require cameras installed around the home to track the player's movements. Quest Pro does offer eye tracking, but with high latency and middling sensitivity; it mainly assists local rendering, and actual interaction remains controller-driven. Oculus headsets also mount 4 to 12 cameras to understand the user's surroundings and support a degree of gesture interaction (for example, picking up a virtual phone with the left hand in VR and tapping OK in mid-air with the right index finger to start a game).
In terms of weight, a head-worn device feels comfortable at roughly 400-700 g (still a monster next to ordinary glasses at around 20 g). But achieving the clarity, refresh rate, and interaction described above, plus computing power (chip performance, size, and count) to match the rendering load and hours of basic battery life, makes the weight of an XR device a difficult balancing act.
To sum up, for XR to succeed the mobile phone as the next generation of mass-market hardware, a device needs resolution above 8K and a refresh rate above 120 Hz to spare users dizziness, a dozen or more cameras, battery life of 4 hours or more (removed only over lunch or dinner), little or no heat, weight under 500 g, and a price as low as $500-1,000. Technology has advanced considerably since the last XR boom of 2015-2019, but it still struggles to meet these standards.
Even so, users who try existing MR (VR + AR) devices will find that, imperfect as the experience is, it is an immersion 2D screens cannot match. Considerable room for improvement remains. Taking Oculus Quest 2 as an example: most watchable VR video is 1440p, short of even Quest 2's 4K resolution ceiling, with frame rates far below 90 Hz; and existing VR games offer relatively crude modeling, with few titles worth trying.
Source: VRChat
2. The killer app has yet to appear
The killer app's absence has historical roots in hardware constraints. Even with Meta squeezing its margins to the limit, a few-hundred-dollar MR headset with a comparatively simple ecosystem still cannot out-compete game consoles, whose ecosystems are richer and whose user bases are already at scale: VR devices number 25-30 million, against roughly 350 million terminals for AAA games (PS5, Xbox, Switch, PC). Most studios have therefore given up on VR, and the few games that do support VR devices merely "add a VR port on the side" rather than "support VR exclusively". On top of that, the problems raised in the first point (pixels, dizziness, poor battery life, heavy weight) mean the VR experience is no better than a traditional AAA gaming terminal's. As for the "immersion" advantage that VR proponents stress: with so few devices in circulation, developers who only "port to VR on the side" rarely design experiences and interactions specifically for VR, so the ideal experience is hard to reach.
The result is that players choosing a VR game over a non-VR one are not merely "picking a new game"; they are also "giving up playing alongside most of their friends", and in games, sociality and immersive experience often outweigh novelty. You might bring up VRChat, but dig deeper and you will find that 90% of its users are not on VR headsets at all; they are players socializing with new friends as avatars in front of ordinary screens. So it is no surprise that the most popular VR titles are rhythm games like Beat Saber.
We therefore believe the emergence of a killer app requires the following elements:
A great leap in hardware performance and in every supporting detail. As noted under "the hardware is not ready", this is not a simple matter of "better screen, better chip, better speakers...", but the product of chips, components, interaction design, and operating system working in concert, which is exactly what Apple excels at: as with the iPod and iPhone more than a decade ago, Apple draws on decades of experience coordinating operating systems across its devices.
A device base on the eve of explosive growth. As the analysis of developer and user mentality above suggests, this is a chicken-and-egg problem: a killer app is unlikely to appear while XR devices count only a few million MAU. At its peak, "The Legend of Zelda: Breath of the Wild" sold more cartridges in the United States than there were Switches, an excellent case of nascent hardware reaching mass adoption. People who buy an XR device just to "experience XR" are gradually let down by the limited content, joking about headsets gathering dust; but most players drawn in by Zelda go on to explore other games in the Switch ecosystem, and they stay.
Source: The Verge
Finally, unified interaction habits and relatively stable compatibility across device updates. The former is easy to understand: with controllers and without them are two different sets of habits for interacting with the machine, and this is precisely what sets Apple Vision Pro apart from other VR devices on the market. The latter shows in Oculus's hardware iteration: a large jump in hardware capability within the same generation ends up limiting the user experience. The Meta Quest Pro, released in 2022, substantially outperforms the Oculus Quest 2 (aka Meta Quest 2) of 2020: resolution raised from Quest 2's 4K to 5.25K, color contrast improved by 75%, refresh rate lifted from 90 Hz to 120 Hz, and Quest 2's 4 cameras joined by 8 external cameras that let VR perceive the outside environment, turning black-and-white passthrough into color, markedly improving hand tracking, and adding face and eye tracking. Quest Pro also employs foveated rendering, concentrating computing power where the eyes are gazing and reducing fidelity elsewhere, saving compute and power.
Yet for all that extra capability, Quest Pro's user base is probably less than 5% of Quest 2's. Developers must therefore build for both devices at once, which sharply limits use of Quest Pro's advantages and in turn dulls its appeal to users. History rhymes: the same story has replayed in game consoles again and again, and it is why console makers refresh hardware and software only every 6-8 years. Buyers of a first-generation Switch need not fear that the later Switch OLED and other hardware will bring incompatible new games, though Wii owners cannot play titles in the Switch ecosystem. For developers targeting console games, whose titles lack mobile's enormous user base (350 million vs. billions) and user dependence (idle at home vs. around the clock), a stable hardware experience across several development cycles is essential to avoid splintering the user base; failing that, like today's VR developers, they must rely on backward compatibility to keep the audience large enough.
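The idea behind foveated rendering is easy to sketch. The toy function below is entirely our own illustration (the radii and shading rates are made-up parameters, not Meta's): it shades at full resolution near the gaze point and at progressively coarser rates toward the periphery.

```python
import math

def shading_rate(pixel, gaze, inner=0.15, outer=0.40):
    """Return the fraction of full shading resolution for a pixel.

    pixel, gaze: (x, y) in normalized [0, 1] screen coordinates.
    inner: radius of the full-detail foveal region.
    outer: radius beyond which the coarsest rate applies.
    """
    d = math.dist(pixel, gaze)
    if d <= inner:
        return 1.0      # full resolution at the fovea
    if d >= outer:
        return 0.25     # quarter resolution in the far periphery
    # linear falloff between the two radii
    t = (d - inner) / (outer - inner)
    return 1.0 - 0.75 * t

gaze = (0.5, 0.5)
print(shading_rate((0.5, 0.55), gaze))  # near the gaze point -> 1.0
print(shading_rate((0.95, 0.9), gaze))  # far periphery -> 0.25
```

Real implementations tie the gaze point to the eye tracker and apply the rate via the GPU's variable-rate-shading hardware; the saving comes from the large peripheral area shaded at a fraction of full cost.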
So, can Vision Pro solve the above problems? How will it change the industry?
A turnaround with Vision Pro
At the keynote, Apple Vision Pro was unveiled. Against the framework of MR's hardware and software challenges analyzed above, we can draw the following comparisons:
Hardware:
Visually, Vision Pro uses two 4K screens that together deliver roughly 6K-class resolution, among the very best of current MR devices. The refresh rate reaches up to 96 Hz, with support for HDR video playback. According to tech bloggers' hands-on reports, not only is the image sharp, but almost no dizziness is felt.
In terms of hearing, Apple has offered spatial audio on AirPods since 2020, letting users hear sound from different directions for a stereoscopic audio experience. Vision Pro is expected to go a step further with "audio ray tracing": fully integrating the device's LiDAR scanning to analyze the room's acoustic characteristics (physical materials and so on), then creating spatial audio matched to the room, with direction and depth.
In terms of interaction, controller-free gestures and eye tracking make the experience silky smooth in the extreme (according to hands-on reports from tech media, latency is barely perceptible, owing not only to sensor accuracy and computation speed but also to eye-path prediction; more on that below).
In terms of battery life, Vision Pro lasts 2 hours, roughly on par with Meta Quest Pro. That is not amazing, and it is currently the most criticized point. But because Vision Pro uses an external power source, a small battery pack on the order of 5,000 mAh rather than one built into the headset, one can guess there is room to swap packs and relay the battery life.
In terms of weight, tech media hands-ons put it at about 1 pound (454 g), roughly the same as Pico and Oculus Quest 2 and presumably lighter than Meta Quest Pro: a good showing for MR equipment (though this does not count the power pack tethered at the waist). Against pure AR glasses (Nreal, Rokid, and the like) at around 80 g, it remains heavy and stuffy. Then again, most pure AR glasses must connect to another device and serve only as an extended screen; an MR device with its own chip and a genuinely immersive experience may be a different thing entirely.
In addition, on raw hardware, Vision Pro carries not only the high-performance M2-series chip for the system and applications, but also an R1 chip developed specifically for MR, handling the displays, monitoring of the surroundings, and eye and gesture tracking for MR's dedicated display and interaction functions.
On the software side, Apple can not only migrate part of its ecosystem of millions of developers, but has also been laying ecosystem groundwork ever since the release of ARKit:
Back in 2017, Apple released ARKit: an augmented reality development framework compatible with iOS devices, letting developers build AR applications on top of iOS hardware and software. ARKit lets digital assets interact with the real world seen through the camera: it uses the iOS device's camera to map the area and CoreMotion data to detect things like tabletops, floors, and the device's position in physical space. In Pokémon GO, for example, you can see Pokémon standing on the ground or perched in trees, rather than pasted on the screen and moving with the camera. Users need no calibration: it is a seamless AR experience.
In 2017, ARKit 1 was released, automatically detecting plane positions, scene topology, and the user's facial expressions for modeling and expression capture.
In 2018, ARKit 2 was released, bringing a better CoreMotion experience and making possible multiplayer AR games, 2D image tracking, and detection of known 3D objects such as sculptures, toys, and furniture.
In 2019, ARKit 3 was released, adding further augmented reality features: People Occlusion displays AR content in front of or behind people and tracks up to three faces; collaborative sessions enable new shared AR gaming experiences; and motion capture understands body position and movement, tracking joints and bones, enabling AR experiences that involve people rather than just objects.
In 2020, ARKit 4 was released, taking advantage of the LiDAR sensor built into that year's iPhone and iPad to improve tracking and object detection. ARKit 4 also added Location Anchors, which use Apple Maps data to place augmented reality experiences at specific geographic coordinates.
In 2021, ARKit 5 was released, allowing developers to build custom shaders, procedural mesh generation, object capture, and character control. Objects can be captured with built-in APIs using the LiDAR and cameras of iOS 15 devices: developers scan an object and instantly convert it to a USDZ file that can be imported into Xcode and used as a 3D model in an ARKit scene or app, greatly improving the efficiency of 3D model production.
In 2022, ARKit 6 was released. The new version includes MotionCapture, which tracks people in the video frame and gives developers a character "skeleton" that predicts the position of the head and limbs, so applications can overlay AR content on a person, or hide content behind them, blending more realistically into the scene.
Looking back at this ARKit groundwork begun in 2017, Apple's accumulation in AR clearly did not happen overnight; it quietly folded AR experiences into devices already in wide circulation. By the time Vision Pro was released, Apple had already amassed a certain stock of content and developers. Moreover, because ARKit development is cross-compatible, what developers build serves not only Vision Pro users but, to a degree, iPhone and iPad users as well: developers need not be capped by a ceiling of some 3 million monthly active headset users, and can potentially test and iterate with hundreds of millions of iPhone and iPad users.
In addition, Vision Pro's 3D video capture partly addresses another of MR's present constraints: content production. Most existing VR video is 1440p, which looks visibly coarse on the wraparound screen of an MR headset; Vision Pro's capture combines high-resolution spatial video with a good spatial audio experience, which may greatly improve content consumption in MR.
Striking as the configuration above already is, the imagination around Apple's MR does not stop there. On launch day, @sterlingcrispin, a developer who says he worked on neurotechnology at Apple, wrote:
Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.
There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.
Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.
Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.
These technologies, highly relevant to neuroscience, may mark a new way for machines and human will to synchronize.
Of course, Vision Pro is not without flaws; its sky-high price of $3,499 is more than twice Meta Quest Pro's and more than seven times Oculus Quest 2's. On this point, Runway CEO Siqi Chen said:
it might be useful to remember that in inflation adjusted dollars, the apple vision pro is priced at less than half the original 1984 macintosh at launch (over $ 7 K in today’s dollars)
By that analogy, Vision Pro's pricing does not seem so outrageous... Still, the first-generation Macintosh sold only 372,000 units, and it is hard to imagine Apple, having bet so heavily on MR, accepting a similarly awkward outcome. Reality, though, may not change much within a few years: AR does not necessarily require glasses, Vision Pro will be hard to popularize in the short term, and it will likely serve mainly as a device for developers to test and experience, a production tool for creators, and an expensive toy for digital enthusiasts.
Source: Google Trends
Nevertheless, Apple's MR device has already begun to stir the market: it shifts ordinary users' appetite for digital products toward MR and shows the public that MR has matured beyond slideware and concept videos. It shows users that besides tablets, TVs, and phones there is the option of wearing an immersive display; shows developers that MR may truly become the next hardware trend; and shows VCs that this may be an investment field with a very high ceiling.
Web 3 and the related ecosystem
1. 3D rendering + AI concept play: RNDR
Introduction to RNDR
Over the past six months, RNDR has been a meme uniting three narratives, the metaverse, AI, and MR, and has repeatedly led the market.
The project behind RNDR is Render Network, a protocol for distributed rendering over a decentralized network. OTOY Inc., the company behind Render Network, was founded in 2009, and its renderer, OctaneRender, is optimized for GPU rendering. For ordinary creators, rendering locally ties up machines, which creates demand for cloud rendering; but renting servers from AWS, Azure, and the like can cost even more. Hence Render Network: no longer bound by one's own hardware, it connects creators with ordinary users who have idle GPUs, letting creators render cheaply, quickly, and efficiently while node operators earn pocket money from their idle GPUs.
On Render Network, participants take one of two roles:
Creator: posts a task and pays with Credits purchased in fiat, or directly in RNDR. (Octane X, used to publish tasks, is available on Mac and iPad; 0.5%-5% of the fee covers network costs.)
Node provider (idle-GPU owner): owners of idle GPUs can apply to become node providers; priority in task matching depends on the reputation earned from previously completed jobs. After a node finishes rendering, the creator reviews and downloads the output. Once the file is downloaded, the fee locked in the smart contract is released to the node provider's wallet.
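The lifecycle just described (fee locked when the task is posted, released to the node once the creator downloads the result) can be sketched as a minimal state machine. This is an illustrative model only, not Render Network's actual contract code; every class and method name here is invented for clarity.

```python
# Illustrative sketch of the Render Network task flow described above:
# a creator's fee is locked in escrow when a task is posted, and released
# to the node provider only after the creator downloads the result.
# All names are hypothetical -- this is not the real contract.

class RenderTask:
    def __init__(self, creator: str, node: str, fee_rndr: float):
        self.creator = creator
        self.node = node
        self.escrow = fee_rndr      # fee locked at task creation
        self.state = "RENDERING"

    def submit_result(self) -> None:
        # Node finishes rendering; creator may now review the output.
        assert self.state == "RENDERING"
        self.state = "AWAITING_DOWNLOAD"

    def download(self) -> float:
        # Downloading the file releases the escrowed fee to the node.
        assert self.state == "AWAITING_DOWNLOAD"
        payout, self.escrow = self.escrow, 0.0
        self.state = "SETTLED"
        return payout

task = RenderTask(creator="alice", node="gpu-node-7", fee_rndr=120.0)
task.submit_result()
print(task.download())  # 120.0 paid out to the node provider
```

The point of the escrow step is that neither side has to trust the other: the creator cannot take the file without paying, and the node cannot be paid without delivering.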
RNDR's tokenomics were also revised this February, which is one of the drivers of its price rise (although as of this writing, Render Network has neither applied the new tokenomics to the network nor given a launch date):
Previously, $RNDR and Credits had the same purchasing power in the network, with 1 Credit = 1 euro. When $RNDR traded below 1 euro, buying $RNDR was more cost-effective than buying Credits with fiat; but once $RNDR rose above 1 euro, everyone preferred to pay in fiat, and $RNDR lost its use case. (Protocol income might have been used to buy back $RNDR, but other market participants had no incentive to buy it.)
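Under that old parity, the cheaper payment route depends only on whether $RNDR trades below one euro. A toy calculation, with the prices as hypothetical inputs:

```python
def cheaper_route(rndr_price_eur: float, credits_needed: float) -> str:
    """Old model: 1 Credit = 1 EUR, and 1 RNDR buys 1 Credit's worth of
    rendering. Paying in RNDR is cheaper only while RNDR trades < 1 EUR."""
    cost_fiat = credits_needed * 1.0                  # EUR, by definition
    cost_via_rndr = credits_needed * rndr_price_eur   # EUR spent buying RNDR
    return "RNDR" if cost_via_rndr < cost_fiat else "fiat"

print(cheaper_route(0.60, 100))  # RNDR -> the token has a use case
print(cheaper_route(1.80, 100))  # fiat -> demand for the token dries up
```

This is exactly the asymmetry the redesign set out to fix: above parity, no rational buyer routes demand through the token.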
The revised economic model adopts Helium's BME (Burn-Mint-Emission) design. When creators purchase rendering services, whether they pay in fiat or in $RNDR, $RNDR equivalent to 95% of the fiat value of the purchase is burned, and the remaining 5% flows to the Foundation as income to fund operations. Node providers no longer receive creators' payments directly; instead, they receive newly minted token rewards, based not only on task-completion metrics but also on other factors such as customer satisfaction.
It is worth noting that new $RNDR is minted in each epoch (a set time period whose exact duration has not been specified), and the amount minted is strictly capped and decreases over time, regardless of how many tokens are burned (see the emission schedule in the official whitepaper). This changes the distribution of benefits among the following stakeholders:
Creators / network users: in each epoch, part of the $RNDR spent by creators is returned to them, with the rebate ratio declining over time.
Node operators: rewarded according to factors such as completed workload and real-time availability.
Liquidity providers: DEX liquidity providers are also rewarded, to ensure there is enough $RNDR available to burn.
Compared with the previous model of irregular revenue buybacks, the new model pays miners more when rendering demand is insufficient; but when the total value of rendering tasks exceeds the total $RNDR emitted as rewards, miners earn less than they would have under the old model (tokens burned > tokens minted), and $RNDR enters a deflationary state.
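The burn-versus-emission balance can be made concrete with a toy epoch model. The 95% burn ratio comes from the design described above; the purchase volumes, token price, and per-epoch emission figures below are invented placeholders, since the exact schedule and epoch length were unannounced at the time of writing:

```python
def net_supply_change(purchases_eur: float, rndr_price_eur: float,
                      epoch_emission: float) -> float:
    """BME sketch: 95% of each purchase's fiat value is burned in $RNDR,
    while node rewards come from a fixed, pre-scheduled emission.
    Returns minted - burned (negative => deflationary epoch)."""
    burned = (purchases_eur * 0.95) / rndr_price_eur  # RNDR destroyed
    return epoch_emission - burned

# Low demand: emission exceeds burn, so nodes earn more than task fees
# alone would have paid, and net supply grows.
print(net_supply_change(purchases_eur=10_000, rndr_price_eur=2.0,
                        epoch_emission=8_000))   # 3250.0 (inflationary)

# High demand: burn exceeds the capped emission, so supply shrinks.
print(net_supply_change(purchases_eur=100_000, rndr_price_eur=2.0,
                        epoch_emission=8_000))   # -39500.0 (deflationary)
```

The crossover point is exactly where total task value equals the fiat value of the epoch's emission, which is the condition stated above for $RNDR turning deflationary.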
Although $RNDR has enjoyed a gratifying rise over the past six months, Render Network's business has not grown the way the token price has: the node count has barely moved in two years, and the monthly $RNDR allocated to nodes has not increased significantly. The number of rendering tasks has indeed grown, which suggests creators' jobs have shifted from a few large tasks to many small ones.
Even if it cannot keep pace with a fivefold rise in the token price within a year, Render Network's GMV (Gross Merchandise Value, total transaction value) did grow substantially: 2022 GMV was up 70% year over year. Based on the total $RNDR allocated to nodes on the Dune dashboard, GMV in the first half of 2023 was roughly $1.19M, essentially flat versus the same period of 2022. That level of GMV clearly cannot support a market cap of about $700 million.
The Vision Pro's impact on RNDR
In a Medium article published on June 10, Render Network claimed that Octane's rendering capability on the M1 and M2 is unique; since the Vision Pro also uses an M2 chip, rendering on the Vision Pro should be no different from rendering on an ordinary M2 desktop.
But the question is: why would anyone publish rendering tasks from a device with a two-hour battery life that is primarily for experiences and entertainment, not a productivity tool? If the Vision Pro's price comes down, its battery life improves substantially, its weight drops, and it truly reaches mass adoption, then it may be Octane's time to shine...
What can be confirmed is that the migration of digital assets from flat screens to MR devices will increase demand for infrastructure. When Unity announced a partnership with Apple to make its game engine a better fit for the Vision Pro, its stock rose 17% that day, a sign of the market's optimism. With Disney also partnering with Apple, the 3D conversion of traditional film and television content may see similar demand growth. Render Network, which specializes in film and TV rendering, launched NeRFs, a 3D rendering technique combined with AI, this February, using AI computation and 3D rendering to create real-time immersive 3D assets viewable on MR devices. With Apple's ARKit support, anyone with a higher-end iPhone can photoscan an object to generate a 3D asset, and NeRF technology uses AI-assisted rendering to turn that simple photoscan into an immersive 3D asset that refracts light correctly from different viewing angles. This kind of spatial rendering will be an important tool for MR content production, and a source of potential demand for Render Network.
But will that demand be met by RNDR? Its 2022 GMV of about $2 million is a drop in the bucket next to film and television industry budgets. In short, RNDR may well ride the "metaverse, XR, AI" meme to another spectacular price run whenever the narrative heats up, but it will still struggle to generate revenue that matches its valuation.
2. Metaverse – Otherside, Sandbox, Decentraland, HighStreet, etc.
Although I think the substantive fundamental changes are limited, MR-related discussion seems inseparable from the big metaverse projects: the Bored Ape team's Otherside, Animoca's The Sandbox, Decentraland (the oldest blockchain metaverse), and Highstreet, which wants to be the Shopify of the VR world. (See section 4, Business Analysis: Industry Analysis and Potential, for a detailed analysis of the metaverse track.)
But as analyzed above in "The killer app has yet to appear", most developers that currently support VR do not support only VR (and even a VR-only industry leader, in a segment of around a million MAU, is not operating at a competitive scale), and existing products have not been carefully adapted to MR's user habits and interaction patterns. Projects that have not yet launched are effectively standing at the same starting line as all the other incumbents and startups that see the Vision Pro's potential: once Unity and the Vision Pro integrate more tightly, the learning cost of MR game development should fall, and experience accumulated in what was a narrow market will be hard to reuse in a product headed for mass adoption.
Of course, if we are talking about first-mover advantages, projects already deployed on VR may retain slight advantages in development progress, technology, and talent.
One More Thing
If you haven't watched the following video, it will give you the most visceral feel for the MR world: convenient and immersive, but chaotic and disorderly. The virtual and the real merge so seamlessly that people spoiled by virtual reality see "losing their identity on the device" as apocalyptic. The details in the video still feel a bit sci-fi and hard to relate to today, but this is likely the future we will face within a few years.
This reminds me of another video. In 2011, twelve years ago, Microsoft released Windows Phone 7 (as a Gen Z with little memory of that era, I find it hard to picture Microsoft once working hard on phones) along with a satirical ad about smartphones called "Really?". The people in the ad clutch their phones at all times: staring at them while cycling, while sunbathing on the beach, even in the shower; falling down the stairs at a banquet while looking at the screen; dropping the phone into a urinal in a moment of distraction... Microsoft's pitch was that its new phone would save us from phone addiction. It was, of course, a failed attempt, and the ad's title "Really?" could just as well have been "Reality". The smartphone's sense of presence and intuitive interaction proved more addictive than the clunky "Windows desktop on a phone", just as a reality blending the virtual and the physical will prove more addictive than plain reality.
How do we position for such a future? A few directions we are exploring:
Immersive experiences and storytelling. Video comes first: after the Vision Pro's release, shooting films "with 3D depth" has never been easier, which will change how people consume digital content, from appreciating it at a distance to experiencing it immersively. Beyond video, "3D spaces with native content" may be another track worth watching. This does not mean scenes stamped out of a template library, or a few nominally explorable spaces lifted from a game, but spaces whose experiences are interactive, content-native, and 3D-friendly. Such a space might be a handsome piano instructor who sits beside you on the bench, highlights the right keys, and gently encourages you when you are discouraged; an elf who hides the key to the next level in a corner of your room; or an empathetic virtual girlfriend who keeps the player company... The creator economy built here can use blockchain rails for trustless, automatically settled, asset-ized digital content with low-friction transactions. Creators can engage fans without intermediaries: no registering a company and wiring up Stripe to accept payments, no handing the platform a cut of 10% (Substack) to 70% (Roblox), and no worrying that the platform will go bankrupt and take their work with it. A wallet, a composable content platform, and decentralized storage can solve the problem.
Similar upgrades will come to games and social spaces; indeed, the boundaries between games, film, and social experiences will blur further. When the experience is no longer a large screen hanging a few meters away but is right in front of you, with depth and spatial audio that varies with distance, the player is no longer a spectator but a character in the scene, whose actions can even affect the virtual environment (raise your hand in the jungle, say, and butterflies land on your fingertips).
Infrastructure and communities for 3D digital assets. The Vision Pro's 3D capture will greatly lower the difficulty of 3D video creation, giving rise to a new market for content production and consumption. The corresponding upstream and downstream infrastructure, such as asset marketplaces and editing tools, may remain dominated by existing giants, or may be cracked open by startups, as happened with AIGC.
Hardware and software upgrades that deepen immersion. Whether it is the finer-grained observation of the human body that Apple is researching to create adaptive environments, or the addition of touch, taste, and other senses to the experience, this is a track with considerable potential.
Of course, entrepreneurs in this field very likely have a deeper understanding, more considered thinking, and more creative explorations than we do. DM @0xscarlettw to discuss the possibilities of the spatial computing era.
Acknowledgments and References:
Thanks to @fanyayun, partner at Mint Ventures, and @xuxiaopengmint, research partner, for their advice, review, and proofreading during the writing of this article. The XR analysis framework draws on @ballmatthew's series of articles, Apple's WWDC keynote and developer sessions, and the author's own experience with the XR devices on the market.
Under such an analogy, the pricing of Apple Vision Pro does not seem too outrageous... However, the sales volume of the first generation of Macintosh was only 372,000 units. It is hard to imagine that Apple, which has worked hard on MR, can accept a similar embarrassing situation— —The reality may not change a lot in a few years. AR does not necessarily need glasses, and it is difficult to popularize Vision Pro in a short period of time. It is likely to be only used as a tool for developers to experience and test, a production tool for creators, and digital enthusiasts expensive toys.
Nevertheless, we can see that Apple's MR equipment has begun to stir up the market, shifting the appeal of ordinary users to digital products to MR, and making the public realize that MR is more mature and no longer a ppt/presentation Video products. Let users realize that besides tablets, TVs, and mobile phones, there is an option to wear immersive displays; let developers realize that MR may truly become a new trend in next-generation hardware; let VCs realize that this may It is an investment field with a very high ceiling.
Web 3 and related ecology
1. 3D Rendering + AI Concept Target: RNDR
Introduction to RNDR
In the past six months, RNDR has been a meme combining the three concepts of Metaverse, AI, and MR, and has led the market many times.
The project behind RNDR is Render Network, a protocol for distributed rendering using a decentralized network. OTOY.Inc, the company behind Render Network, was founded in 2009 and its rendering software, OctaneRender, is optimized for GPU rendering. For ordinary creators, local rendering takes up a lot of machines, which creates a demand for cloud rendering, but if you rent servers from AWS, Azure and other manufacturers for rendering, the cost may also be higher—this is The Render Network was born. Rendering is not limited to hardware conditions. It connects creators and ordinary users with idle GPUs, allowing creators to render cheaply, quickly and efficiently, and node users can use idle GPUs to earn pocket money.
For Render Network, participants have two identities:
The tokenomics of RNDR was also changed in February this year, which is one of the reasons for its price increase (but until the article was published, Render Network has not applied the new tokenomics to the network, and has not yet given the specific launch time):
Previously, in the network, the purchasing power of $RNDR was the same as that of Credit, and 1 credit = 1 euro. When the price of $RNDR is less than 1 euro, it is more cost-effective to buy $RNDR than to buy Credit with fiat currency, but when the price of $RNDR rises to more than 1 euro, because everyone tends to buy with fiat currency, $RNDR will lose its use case Condition. (Although the income from the agreement may be used to repurchase $RNDR, other players in the market have no incentive to buy $RNDR.)
The changed economic model adopts Helium's "BME" (Burn-Mint-Emission) model. When creators purchase rendering services, regardless of whether they use fiat currency or $RNDR, they will destroy $RNDR equivalent to 95% of the fiat currency value, and the remaining 5% Income that flows to the Foundation for use as an engine. When the node provides services, it no longer directly receives the creator’s income from purchasing rendering services, but receives newly minted token rewards. The basis for rewards is not only based on task completion indicators, but also other comprehensive factors such as customer satisfaction.
It is worth noting that for each new epoch (specific time period, the specific duration has not been specified), new $RNDR will be minted, and the amount of minting is strictly limited and will decrease over time, regardless of the number of tokens burned (details See the release document for the official white paper). Therefore, it will bring changes in the distribution of benefits to the following Stakeholders:
Compared with the previous income (irregular) repurchase mode, under the new mode, when the demand for rendering tasks is insufficient, miners can get more income than before, and the total task price corresponding to the demand for rendering tasks is greater than the released $RNDR When the total amount of rewards is increased, miners will receive less income than the original model (tokens burned > newly minted tokens), and $RNDR tokens will also enter a deflationary state.
Although $RNDR has enjoyed a gratifying rise in the past six months, the business situation of Render Network has not increased significantly like the currency price: the number of nodes has not fluctuated significantly in the past two years, and the monthly $RNDR allocated to nodes has not increased significantly, but the rendering The number of tasks has indeed increased—it can be seen that the tasks assigned by creators to the network have gradually moved from a single large amount to multiple small amounts).
Although it can't keep up with the five-fold increase in currency prices in a year, the GMV of Render Network has indeed ushered in a relatively large growth. In 2022, GMV (Gross Merchandise Value, total transaction value) will increase by 70% compared with last year. According to the total amount of $RNDR allocated to nodes on the Dune Kanban, the GMV in the first half of 2023 is about $1.19 M, which is basically no increase compared to the same period in 2022. Such GMV is obviously not enough for the $700 million mCap.
Introduction of Vision Pro impact on RNDR
In a Medium article published on June 10, Render Network claims that Octane's rendering capabilities for the M 1 and M 2 are unique - since the Vision Pro also uses the M 2 chip, rendering in the Vision Pro won't be the same as a normal M 2 chip. Desktop rendering is different.
But the question is: why publish rendering tasks on a device with a 2-hour battery life that is mainly used for experience and play, not a productivity tool? If the price of Vision Pro is lowered, the battery life is greatly improved, the weight is reduced, and Mass Adoption is truly realized, it may be time for Octane to play a role...
It can be confirmed that the migration of digital assets from flat devices to MR devices will indeed bring about an increase in demand for infrastructure. Announcing the cooperation with Apple to study how to create a game engine Unity that is more suitable for Vision Pro, the stock price rose 17% on the day, which also shows the optimistic sentiment of the market. With the cooperation between Disney and Apple, the 3D transformation of traditional film and television content may usher in similar demand growth. Render Network, which specializes in film and television rendering, launched NeRFs, a 3D rendering technology combined with AI, in February this year, using artificial intelligence computing and 3D rendering to create real-time immersive 3D assets that can be viewed on MR devices – in the Apple AR Kit With support, anyone can perform Photoscan on objects with a higher configuration iPhone to generate 3D assets, while NeRF technology uses AI-added rendering to render the simple Photoscan 3D into different angles that can refract different lights Immersive 3D assets - this kind of spatial rendering will be an important tool for MR device content production, providing potential demand for Render Network.
But will that demand actually be captured by RNDR? Its 2022 GMV of roughly $2 million is a drop in the bucket compared with film and television industry budgets. In sum, RNDR may well ride the "metaverse, XR, AI" memes to another spectacular price run whenever those narratives heat up, but it will remain difficult for it to generate revenue that matches its valuation.
2. Metaverse – Otherside, Sandbox, Decentraland, HighStreet, etc.
Although I believe the substantive fundamental changes are limited, MR-related narratives seem inseparable from the big metaverse projects: Yuga Labs' (BAYC's) Otherside, Animoca's The Sandbox, Decentraland (the oldest blockchain metaverse), and Highstreet, which aims to be the Shopify of the VR world. (See section 4, "Business Analysis – Industry Analysis and Potential," for a detailed analysis of the metaverse track.)
But as analyzed above in "The killer app has not yet appeared," most existing developers who support VR do not support *only* VR (and even those who do, and lead the industry, are competing in a niche of only around a million MAU, which is hardly top-tier scale), and existing products have not been carefully adapted to MR user habits and interaction patterns. Projects that have not yet launched are effectively standing at a starting line not far behind every other major company and startup that sees the Vision Pro's potential: once Unity integrates better with the Vision Pro, the learning cost of MR game development is expected to fall, and experience accumulated in what was a narrow market will be hard to reuse in a product headed for mass adoption.
Of course, in terms of first-mover advantage, projects that have already deployed on VR may retain modest advantages in development progress, technology, and talent accumulation.
One More Thing
If you haven't seen the following video, it will give you the most intuitive feel for the MR world: convenient and immersive, but chaotic and disorderly. The virtual and the real merge so seamlessly that people spoiled by virtual reality see "losing their identity on the device" as apocalyptic. The details in the video still feel a bit like incomprehensible science fiction today, but this is likely the future we will face within a few years.
This reminds me of another video. In 2011, twelve years ago, Microsoft released Windows Phone 7 (as a Gen Z with little memory of that era, I find it hard to imagine that Microsoft once worked hard on phones) along with a satirical ad about smartphones called "Really?": people in the ad clutch their phones at all times, stare at them while cycling, while sunbathing on the beach, even in the shower; they fall down the stairs at a banquet because they're glued to the screen, and drop their phones into urinals out of distraction... Microsoft's intent was to show users that "the phone Microsoft is releasing will save us from phone addiction." It was, of course, a failed attempt, and the "Really?" ad might as well have been retitled "Reality." The smartphone's sense of presence and intuitive interaction design proved more addictive than the clunky "mobile version of a Windows PC," just as a reality that blends the virtual and the real is more addictive than plain reality.
How do we position for such a future? Here are several directions we are exploring:
Of course, entrepreneurs in this field very likely have deeper understanding, sharper thinking, and more creative explorations than we do; you are welcome to DM @0xscarlettw to discuss the possibilities of the spatial computing era.
Acknowledgments and References:
Thanks to @fanyayun, partner at Mint Ventures, and @xuxiaopengmint, research partner, for their advice, review, and proofreading during the writing of this article. The XR analysis framework draws on @ballmatthew's series of articles, Apple's WWDC keynote and developer sessions, and the author's hands-on experience with various XR devices on the market.