In the context of metaverse wearables, semantic interoperability refers to a wearable carrying the same meaning across two or more different environments. For example, this meaning could include the visual expression of a piece, its item attributes in the context of certain games, or gatekeeper functionality. It depends highly on the environment or context in which the wearable is embedded.
When comparing the graphic styles of Decentraland and The Sandbox, one can easily argue that the visual expression of, say, a virtual dress is limited by the graphical constraints of each metaverse. A visual transformation of the wearable between environments (e.g. from low-poly to voxel-based), without losing its meaning, is therefore needed. This could be done by the metaverses themselves, by fashion creators, or by a third-party intermediary protocol, Gravity Layer being one of them. We strongly believe that future companies will address this problem by offering (semi-)automatic transformation as a new application of AI models.
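To make the low-poly-to-voxel idea concrete, here is a minimal sketch of one building block of such a transformation: snapping mesh vertices onto a discrete voxel grid. The function name and voxel size are our own illustrative choices, not part of any metaverse SDK.

```python
# Illustrative sketch only: one step of a low-poly -> voxel transformation
# is quantizing mesh vertices onto a discrete voxel grid. A full pipeline
# would also fill the voxels a triangle passes through and re-map textures.

def voxelize_vertices(vertices, voxel_size=0.5):
    """Map 3D vertex coordinates to the set of voxel cells they occupy."""
    cells = set()
    for x, y, z in vertices:
        cells.add((int(x // voxel_size),
                   int(y // voxel_size),
                   int(z // voxel_size)))
    return cells

# A small triangle collapses into two voxel cells at this resolution.
tri = [(0.1, 0.1, 0.0), (0.4, 0.2, 0.0), (0.9, 0.1, 0.0)]
print(voxelize_vertices(tri))  # {(0, 0, 0), (1, 0, 0)}
```

The voxel size controls how much visual detail survives the transformation, which is exactly where "preserving meaning" becomes a design decision rather than a purely mechanical step.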
Syntactic interoperability refers simply to the packaging (format) and transmission mechanisms for the data that is then interpreted (given meaning). The two relevant categories here are content/metadata and ownership. Content is mostly stored in 3D graphics formats long used in the fashion industry, the visualization industry, and video games. Each format has its own characteristics and capabilities. A common practice is to store the original content in a feature-rich format and to reduce it to a simpler format when needed; metadata can follow the same principle. Both content and metadata should be readable and interpretable (i.e. the format must be understood).
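The "feature-rich original, reduced on demand" principle can be sketched for metadata as a simple projection. The field names below are hypothetical, not drawn from any real wearable standard.

```python
# Sketch of reducing feature-rich wearable metadata to a simpler record
# for a less capable target environment. All field names are hypothetical.

FULL_METADATA = {
    "name": "Neon Jacket",
    "category": "upper_body",
    "model": {"format": "glTF", "triangles": 1450, "textures": ["albedo", "normal"]},
    "attributes": {"rarity": "epic", "game_bonus": {"speed": 2}},
}

def reduce_metadata(full, keep=("name", "category")):
    """Derive a simpler metadata record by keeping only selected fields."""
    return {k: full[k] for k in keep if k in full}

print(reduce_metadata(FULL_METADATA))
# {'name': 'Neon Jacket', 'category': 'upper_body'}
```

The same one-way reduction applies to content: a high-resolution source model can always be decimated for a simpler environment, but not the other way around, which is why the feature-rich original should be the canonical copy.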
Let us now shift our view to wearable ownership. To make sure that a specific wearable is owned by somebody, we need to be able to verify its ownership. This is usually done by the user signing a challenge with her/his wallet account and presenting the signature to the entry-point application. This assumes that the wearable exists as an NFT on the same chain as the user's account. However, other validity-check mechanisms are possible too, as we will see below with Decentraland's Linked Wearables.
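The control flow of such an ownership check can be sketched as follows. Real wallet-signature recovery requires secp256k1 cryptography (e.g. via a library such as eth_account); here a placeholder lookup stands in for it so the flow itself stays visible. All names and addresses are hypothetical.

```python
# Sketch of the ownership-check flow: the entry-point application recovers
# the signer's address from a signed challenge and compares it against the
# on-chain owner of the wearable's NFT. The recovery step is mocked.

NFT_OWNERS = {"wearable-42": "0xa1b2c3"}  # hypothetical on-chain state

def recover_address(message, signature):
    # Placeholder: a real implementation cryptographically recovers the
    # signer's address from the message and signature (secp256k1).
    return signature.get("signer")

def user_owns_wearable(wearable_id, challenge, signature):
    signer = recover_address(challenge, signature)
    return NFT_OWNERS.get(wearable_id) == signer

sig = {"signer": "0xa1b2c3"}
print(user_owns_wearable("wearable-42", "login-challenge", sig))  # True
```

Note that this check only works when the NFT and the signing account live on the same chain, which is exactly the assumption the text points out.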
Generally speaking, interoperability can be driven by different stakeholders: creators, the metaverses themselves (represented by DAOs), or third-party middleware companies working on behalf of the creators or the metaverses. Market power dynamics influence where the drive toward interoperability stems from. Currently, creators are working on semantic interoperability, re-fitting wearables to different contexts or graphic styles, while metaverses work on syntactic interoperability, making it easier for creators to reuse their existing NFT collections, e.g. as we will discuss below in the section on Decentraland's Linked Wearables.
We have discussed interoperability from very abstract, conceptual fundamentals. With this framework in place, we can now look at a concrete metaverse, apply it to understand design choices, and take a snapshot of the current progress. To allow anybody with basic crypto knowledge to follow the discussion, we will start with the basics of wearables in Decentraland.
Decentraland wearables are concrete instances of items. Analogous to NFTs with multiple editions, an item can be minted in multiple editions, e.g. 500 copies of a specific T-shirt item. Items can be bundled into collections for better organization. The underlying NFTs for each item live on the Polygon sidechain, where creators and members can mint, buy, sell, or transfer items with minimal gas fees.
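The item/edition relationship can be sketched as a small class: one item, a fixed maximum supply, and sequential edition numbers as copies are minted. Class and field names are our own, not Decentraland's contract interface.

```python
# Minimal sketch of an item with a fixed number of editions, as in minting
# 500 copies of a T-shirt item. This mirrors the concept only, not the
# actual Polygon smart contracts.

class Item:
    def __init__(self, name, max_editions):
        self.name = name
        self.max_editions = max_editions
        self.minted = 0

    def mint(self):
        if self.minted >= self.max_editions:
            raise ValueError("all editions minted")
        self.minted += 1
        return self.minted  # edition number of the newly minted copy

shirt = Item("Classic Tee", max_editions=500)
print(shirt.mint())  # 1
print(shirt.mint())  # 2
```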
Each wearable falls into one of the following categories, each modifying a certain part of the avatar: Body shape (the shape of the entire character), Hat, Helmet, Hair, Facial hair, Head, Upper body (e.g. a jacket or shirt), Lower body (e.g. pants or shorts), Feet, and Skin. Accessories that modify other parts are also possible: Mask, Eye Wear, Earring, Tiara (a crown or similar item that sits on the head), and Top-head (something applied above the head, e.g. a halo).
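The category lists above can be captured as simple sets for validation purposes. The grouping follows the text; the snake_case identifiers and the function name are our own.

```python
# The wearable and accessory categories from the text, as plain sets.
# Identifiers are our own normalization of the category names.

BODY_CATEGORIES = {
    "body_shape", "hat", "helmet", "hair", "facial_hair", "head",
    "upper_body", "lower_body", "feet", "skin",
}
ACCESSORY_CATEGORIES = {"mask", "eye_wear", "earring", "tiara", "top_head"}

def is_valid_category(category):
    """Check whether a category is one of the supported slots."""
    return category in BODY_CATEGORIES or category in ACCESSORY_CATEGORIES

print(is_valid_category("upper_body"))  # True
print(is_valid_category("wings"))       # False
```

A closed category set like this is precisely what makes semantic interoperability hard later on: a wearable concept from another metaverse must be mapped onto one of these fixed slots or rejected.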
The team behind Decentraland released 3D model templates for Blender, a popular modeling and animation tool, to fit wearables around different sections of the human avatar body. Wearables must adhere to the content policy and to technical restrictions such as the triangle count. After creation, Decentraland's wearable editor can be used to preview, submit, and manage wearables. In the submission process, the Curation Committee decides whether the submitted collection will be accepted.
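A technical restriction like the triangle count amounts to a simple pre-submission check. The limit value below is illustrative only, not Decentraland's actual threshold.

```python
# Sketch of a pre-submission technical check on a wearable's geometry.
# The limit is a made-up illustrative number, not the real restriction.

MAX_TRIANGLES = 1500  # hypothetical per-wearable limit

def passes_technical_check(triangle_count, max_triangles=MAX_TRIANGLES):
    """Return True if the model stays within the triangle budget."""
    return triangle_count <= max_triangles

print(passes_technical_check(1450))  # True
print(passes_technical_check(5000))  # False
```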
The next two sections will dive deeper into the technical aspects of achieving interoperability within Decentraland. Feel free to jump to the conclusion at the end.
Looking at the publishing process above through an interoperability lens, we can easily identify where semantic and syntactic interoperability challenges might arise.
First, syntactic interoperability: the NFTs (the ownership logic of a given wearable) on Polygon/Matic are created specifically for Decentraland, which limits the scope of interoperability of such an NFT to Decentraland alone. Furthermore, this special Decentraland NFT has to exist on Polygon/Matic, which prevents NFTs on other chains from having a wearable representation in Decentraland. The format used for 3D models and their animations is glTF (GL Transmission Format), chosen because it is efficient and highly interoperable with modern web technologies. For Decentraland, animations have to be embedded in the glTF file, while textures can be either embedded or referenced.
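Because glTF's scene description is JSON, checking whether a file carries embedded animations is straightforward. The fragment below is a minimal hand-written document for illustration, not a real exported asset; `has_embedded_animation` is our own helper.

```python
import json

# Sketch: glTF stores its scene description as JSON, so a tooling check
# for embedded animations is a simple key lookup. The document below is a
# minimal hand-written fragment, not a complete or valid glTF asset.

gltf_text = json.dumps({
    "asset": {"version": "2.0"},
    "animations": [{"name": "wave"}],
    "meshes": [{"name": "jacket"}],
})

def has_embedded_animation(gltf_json):
    """Return True if the glTF JSON declares at least one animation."""
    doc = json.loads(gltf_json)
    return len(doc.get("animations", [])) > 0

print(has_embedded_animation(gltf_text))  # True
```

This JSON-based structure is a large part of why glTF interoperates so well with web technologies: ordinary web tooling can inspect and validate assets without specialized parsers.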
In terms of semantic interoperability, Decentraland supports only a limited number of wearable categories and treats two human body shapes as the norm. A wearable worn by an ape on Otherside would therefore need to be re-fitted/translated to a Decentraland body shape. In extreme cases, wearables designed for one metaverse may be impossible to re-fit to another, e.g. from a fish metaverse to a human-avatar-based one. Re-fitting is further constrained by the content policies and technical limitations of the respective metaverses. As an example of the former, one metaverse might enforce copyright while another does not; as an example of the latter, one metaverse might allow only low-poly models whereas another permits high-resolution ones. Bridging semantic gaps is not an easy task and involves context-dependent decision-making by the collection creators.