Abstract
The Spaces session discussed the challenges and opportunities of tokenizing real-world assets (RWA) on blockchain, emphasizing the importance of data standardization and trustworthiness. Key speakers from RWA World, Chronicle, Credora, and Synnax shared insights into their current practices and technologies for maintaining data accuracy and integrity. The discussions also covered the need for decentralized data sources, potential solutions for data privacy issues, and emerging technologies that could enhance the quality of RWA data on-chain. The importance of industry collaboration to establish standards was highlighted.
Highlights
- The importance of non-issuer-provided data was emphasized to avoid moral hazards.
- Significant gaps exist between blockchain technology and traditional banking systems.
- Encryption methods can address data privacy concerns.
- Emerging technologies like zero-knowledge proofs and secure computation can future-proof data integrity.
- Industry collaboration and self-regulation are crucial for standardization.
Outline
- 02:13 – Greetings and setting up the discussion.
- 04:58 – Introduction of speakers and their backgrounds.
- 05:46 – Overview of Chronicle’s work with Ethereum and Oracles.
- 06:17 – Introduction to Credora’s credit assessment methodologies.
- 10:20 – Discussion on Polytrade’s approach and history with Credora.
- 13:03 – Main data needs for real-world assets and their categorization.
- 15:53 – Challenges of standardizing data across different platforms and assets.
- 21:19 – Synnax’s approach to data consistency and standardization.
- 27:06 – Discussion on data integrity and the challenges of diverse data sources.
- 31:11 – Considerations in privacy vs. transparency in data usage.
- 42:46 – Emerging technologies to enhance data trustworthiness.
- 53:35 – Closing remarks and emphasis on collaboration and self-regulation.
RWA Standardization and Data Trustworthiness
Introduction
The discussion featured representatives from RWA World, Chronicle, Credora, and Synnax, who explored the challenges and opportunities in tokenizing real-world assets (RWA) on the blockchain.
Data Needs for RWA
Participants emphasized the necessity of standardized data across various RWA categories. Matt from Credora elaborated on their credit assessment methodologies and the critical role of real-time data. Rob from Synnax discussed the importance of consistent data formats, which are essential for making machine learning models more effective.
Challenges of Data Accuracy and Trust
Nicholas from Chronicle highlighted the moral hazard involved when data is provided solely by the issuer, stressing the importance of independent verification. Tyler from RWA World addressed the gap between blockchain technology and traditional banking systems, which often still rely on outdated infrastructure. The panel underscored the importance of sourcing data from diverse and reliable origins to avoid situations similar to the FTX debacle.
Privacy vs. Transparency
The conversation also addressed the delicate balance between data privacy and transparency. Rob from Synnax recommended the use of encryption methods to safeguard data, while Matt from Credora emphasized the necessity of proper controls and processes to manage sensitive information effectively.
Emerging Technologies
The panel discussed emerging technologies, such as zero-knowledge proofs and secure computation, as tools to enhance data integrity and trustworthiness. There was consensus that continued decentralization would help mitigate biases and improve data reliability.
Closing Thoughts
The speakers collectively emphasized the importance of industry collaboration to establish standards and encourage innovation in the RWA space. They also highlighted the need for self-regulation to avoid the imposition of heavy regulatory burdens.
Transcript
Speaker 1: Shreya (Head of Marketing and Content at Polytrade, Host of the discussion). Speaker 2: Tyler (from RWA World). Speaker 3: Ray Buckton (Head of Research at RWA World). Speaker 4: Nicholas (from Chronicle). Speaker 5: PG (Founder and CEO of Polytrade). Speaker 6: Matt (Co-founder of Credora). Speaker 7: Rob (Co-founder and CEO of Synnax).
Speaker 1: Shreya
02:14 – 02:21
GM, everybody. Hello. How’s it going? Hello.
Speaker 2: Tyler
02:22 – 02:25
It’s going very well. Thanks so much for asking. How are you today?
Speaker 3: Ray Buckton
03:50 – 03:59
GM, GM. Going absolutely fantastic. Another beautiful day here in Tennessee and super stoked to spend it talking tokenization and sip some coffee with you guys.
Speaker 4: Nicholas
04:13 – 04:14
And GM, GM.
Speaker 5: PG
04:14 – 04:17
Guys, are you accelerating through outer space?
Speaker 3: Ray Buckton
04:25 – 04:31
Absolutely. The weather is good, the coffee is good, and now we get to answer the question, is the data good? So that is the burning question.
Speaker 5: PG
04:32 – 04:41
For sure. I think Ray and Tyler are my favorite people to do this space with; there’s so much to learn in each session.
Speaker 3: Ray Buckton
04:45 – 04:59
The feeling is definitely mutual. It is an ever-growing industry, and that’s something we learn quickly getting involved. The scope of tokenization is consistently growing, with more technologies being developed. You guys are doing amazing work with your ERC standard. Awesome.
Speaker 1: Shreya
04:59 – 06:17
I think we have most of the speakers in here already. Awesome. Good morning, everyone! This is the third episode in a series where, each week on Tuesdays and Fridays, we debug fundamental problems in this space. We touch on various topics, problems, and opportunities in the RWA space. This week, we are hosting with RWA World, pioneers in the space themselves. It’s really lovely to have you guys co-hosting the space with us.
For speakers, we have Chronicle, Synnax, and Credora joining us today, along with PG, Su Chen, and Odd Mills from the Polytrade team. We’re really glad to have you guys today. I’ll give the stage to today’s speakers. We would love to know more about you and your protocols, and then we can deep dive into the problem we are discussing today, which is on-chain data.
Speaker 2: Tyler
06:18 – 06:53
Awesome. Well, maybe I’ll go first. So this is Tyler here behind the RWA World account. And I’m joined by our Head of Research, Ray Buckton. We are a real-world asset intelligence platform focused on bringing transparency and clarity to the world of real-world assets. We have a database of over 425 real-world asset companies available at rwa.world, and we publish a weekly newsletter. I’m pretty familiar with a lot of the names and faces up here on the stage, and I’m really excited and grateful for the opportunity to co-host this with Polytrade.
Thanks so much for having us. I’m really looking forward to our discussion today. So maybe first, let’s pass it to Chronicle for a brief intro, and then we’ll go from there.
Speaker 4: Nicholas
06:56 – 08:03
Absolutely. Thank you. So, Chronicle is actually one of the first Oracles on Ethereum. Back then, we were called MakerDAO. So, when we were building DAI, we had to build out all of these primitives that we take for granted in DeFi today, like DEXs and Oracles. You know, even a debugger in Solidity was considered bleeding edge at the time. And so, when we launched DAI, that was essentially the first version of Chronicle. Over the years, we’ve just continued to develop it and perfect it to the point where now we’ve spun out from MakerDAO completely, we’ve productized the Oracle, and we’re now serving across 10 different chains. We’re serving huge clients, right?
So not just Maker, but also Morpho, Aave Protocol, M0, and Dolomite, and we’re pursuing the RWA space aggressively.
Speaker 2: Tyler
08:05 – 08:16
That’s so exciting. Thanks so much for being here. And I love that you mentioned being the first Oracle on Ethereum—really exciting to get into that. We also have folks from Credora here.
Speaker 6: Matt
08:18 – 08:22
Yeah, hey, how’s it going? Go ahead. So yeah, I’m Matt, I’m one of the co-founders of Credora.
Speaker 7: Rob
08:23 – 08:24
We’ve—
Speaker 6: Matt
08:24 – 10:19
—been providing credit assessments in the private credit space for about four years now. Generally, our approach incorporates a lot of methodological learnings from major ratings agencies. We also include real-time data in our credit assessment process, along with different risk factors we think are more pertinent to the private credit space as a whole. For example, one of our assessments is of information quality. We’ve provided detailed credit assessments on over 120 borrowers and issuances now, generally segmented into corporate issuers and various unsecured or secured lending opportunities. And then, in terms of on-chain data, we recently started publishing our core credit metrics on-chain. Depending on the type of issuance, those generally include a credit score; what we call a ratings agency equivalent, that is, a mapping of that credit score to the more standardized scales you would see in traditional finance; an implied probability of default; and a borrow capacity.
We partner with a bunch of different protocols on-chain, where we generally provide these credit assessments as a service. We’re particularly excited about the ability to use that on-chain credit information right at the smart contract level. We see a lot of friction in the processes today, particularly in secondary-market trading of these private debt issuances. So we’re generally excited about the ability to consume that information, set parameters, and make other decisions at the smart contract level.
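A minimal sketch of how such on-chain credit metrics might be structured and consumed. The field names and score-to-rating breakpoints below are illustrative assumptions, not Credora’s actual schema or methodology.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the metrics Matt describes: a credit score,
# a ratings-agency-equivalent mapping, an implied probability of default,
# and a borrow capacity. All values and breakpoints are illustrative.
@dataclass
class CreditAssessment:
    borrower: str
    credit_score: float          # e.g. 0-100, higher is better
    implied_pd: float            # implied probability of default, 0-1
    borrow_capacity_usd: float

RATING_BANDS = [  # (minimum score, ratings-agency equivalent), assumed values
    (90, "AA"), (80, "A"), (70, "BBB"), (60, "BB"), (50, "B"), (0, "CCC"),
]

def rating_equivalent(score: float) -> str:
    """Map a proprietary credit score onto a traditional ratings scale."""
    for floor, rating in RATING_BANDS:
        if score >= floor:
            return rating
    return "CCC"

assessment = CreditAssessment("example-issuer", 74.0, 0.021, 5_000_000)
print(rating_equivalent(assessment.credit_score))  # -> "BBB"
```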
Speaker 2: Tyler
10:20 – 10:50
Amazing. Matt, thank you so much for being here. And I do see someone from the Synnax team in the audience. I’m sending you an invite to jump on the stage here. Unless I’m mistaken. Odd Mills, I don’t see anyone else from them. We’re also looking for T0 as well in the audience.
So, we’ll send them a speaker invite once we see them here in the audience, and then we’ll get going. Thanks, everyone, for your patience. Maybe I can kick it off with some quick intros as well from the rest of the Polytrade team, and then we’ll start with our first question. Maybe Odd Mills and PG, would you guys like to briefly overview Polytrade?
Speaker 5: PG
10:53 – 12:27
Sure. Thank you so much. Again, beautiful space, beautiful people. I love this space particularly because our relationship with Credora goes a long way back: we got rated early on, and we were the first to launch a bond with one of our partners rated by Credora. So yeah, fond memories. Very quickly, Polytrade has been in the RWA space for almost three years, aggregating tokenized assets across chains and across protocols. We’re currently partnered with 50+ protocols across 11 chains, and the list keeps getting bigger every day: more chains, more protocols. The idea is very simple: create an Amazon-like experience where people can come to a one-stop shop for buying tokenized assets.
And yes, the aim is to simplify the user journey. The aim is to bring Web2 users to Web3. And, you know, why should they come here? Because tokenization and its benefits are easy to understand for anybody. So Polytrade is continuously working on that side. And these spaces are particularly to highlight the issues in the industry and how we can work together to solve them. Myself, I’m the founder and CEO of Polytrade. I have Odd Mills here—he is Head of Strategy. I have Su Chen here, who is Chief Product Officer. We all three will be here. Thank you.
Speaker 2: Tyler
12:29 – 13:06
Well, awesome. I think that’s a great overview of Polytrade and the rest of the team here on the stage. And if you’re listening to this and you’re enjoying this, please retweet the space and go ahead and leave us any comments. And if you have any questions you want us to cover as we’re going through this, we’ll look
through them. But let’s get started with our first question. We’re talking about RWA data, and I want to set the table. What are the main data needs for real-world assets?
And how would we categorize these? Maybe Matt will start with you. Would you like to help us overview this?
Speaker 6: Matt
13:07 – 15:25
Yeah, sure. So maybe speaking from our perspective, we think credit assessments are important. Particularly in private credit, we think it’s a good distillation of the overall risks. We try to make the methodologies we use super transparent so that someone can understand the process we’ve gone through to analyze that risk. But, you know, in these private debt spaces, there’s sensitivity to sharing some of that underlying information with a wider audience. So, as a starting point, we think that distillation into credit scores is a good first step. To maybe segment it further, I think a lot of what we see today in the more generalized RWA space is validation of the underlying assets in some form, or relevant data in the context of different types of secured issuances, where maybe some of the data most relevant for a credit assessment isn’t as sensitive. So, in that sense, I’m maybe more talking about, for example, if you have a tokenized Treasury bill.
It’s really the validation of those assets at the underlying brokerage firm, or of any bank account assets supporting that issuance, and then broadcasting that validation on-chain. Chainlink, for example, has called this more of a proof-of-reserves exercise. The other category I have is market data. In the context of implementing the parameterization for a debt issuance at the smart contract level, market data is pretty important. In a credit assessment, those metrics being available can be taken into consideration, and you can also potentially take broader traditional-market credit spreads into account to determine an appropriate interest rate for a specific issuance. So, I can give you those three and open up the discussion from there.
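As a toy illustration of that proof-of-reserves idea, the check below compares an attested off-chain balance against the outstanding token supply before further issuance is allowed. The function name, par value, and coverage threshold are assumptions for the sketch, not any specific protocol’s logic.

```python
# Toy proof-of-reserves check: do attested reserves cover the tokens issued
# against them? All names and numbers are illustrative.

def reserves_cover_supply(attested_reserves_usd: float,
                          token_supply: float,
                          token_par_value_usd: float = 1.0,
                          min_coverage: float = 1.0) -> bool:
    """True if attested reserves cover outstanding supply at par."""
    liabilities = token_supply * token_par_value_usd
    return attested_reserves_usd >= liabilities * min_coverage

# e.g. a tokenized T-bill product with 10M tokens outstanding at $1 par
assert reserves_cover_supply(attested_reserves_usd=10_050_000,
                             token_supply=10_000_000)
```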
Speaker 2: Tyler
15:27 – 15:52
That’s awesome. And then I also just heard from Polytrade that it looks like Rob from Synnax might be here. So Rob, if you want to send a speaker invite, we’ll bring you up here to jump in. Oh, is that Ray in RWA or is that—no, that’s not. Okay, cool. I’d also like to hear from Chronicle in the meantime as well. How would you answer this question?
You know, what are the main data needs that you’re seeing for RWAs and how would you categorize those?
Speaker 4: Nicholas
15:54 – 19:19
Yeah, so I think that’s quite diverse and it really depends on what particular asset type you’re talking about. From what has recently gotten traction in the RWA space, it’s primarily DAOs trying to get access to Treasury bill yields. That was kind of seen as the catalyst. You saw that executed at scale by players like Ondo, MakerDAO, and Superstate. In that particular context, you can say, okay, what do you want to know? Well, we want to know the custody of these assets—that they actually exist. So, if we’re going to lend you a hundred million, we want to know that you actually went and purchased a hundred million of T-bills.
So, what are the CUSIPs? What is the term? What is the yield? What is the distribution of turnover with those T-bills? Is it all concentrated on a particular date, or do we have a very uniform maturity distribution there? Those are the types of things that a credit engine protocol like Maker would be interested in having. As soon as you have that type of information on-chain, you can have a lot of protocol automation start kicking in. For example, let’s use Maker again. Instead of lending someone 200 million straight up, which can be risky, you can tranche it and start saying, okay, we’ll only lend you 10 million in one go, and then we want to see that 10 million cycle through and end up with 10 million worth of T-bills at the custodian. Only when that has been confirmed on-chain does the next tranche of 10 million unlock. So you’ve taken something that used to be a very manual process and augmented it with an automated process, but that’s backed by an actual risk-based metric, if you will.
But again, I’ll stop there. That’s a very particular example that pertains exclusively to T-bills. For some type of more advanced fixed-income products, you might do something different. If you’re trying to tokenize real estate, you might do something entirely different. So, the data dependency is very dependent on the asset type.
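A rough sketch of the tranche workflow Nicholas describes, written as plain Python state logic rather than an actual smart contract. The class and method names, and the idea of passing an attested custodial value directly, are illustrative assumptions; in production the confirmation would come from an oracle-signed attestation.

```python
# State-machine sketch of tranched lending: each new tranche unlocks only
# after the previous tranche is confirmed as T-bills at the custodian.

class TranchedFacility:
    def __init__(self, total_usd: float, tranche_usd: float):
        self.remaining = total_usd
        self.tranche = tranche_usd
        self.outstanding = 0.0        # disbursed but not yet confirmed

    def draw(self) -> float:
        if self.outstanding > 0:
            raise RuntimeError("previous tranche not yet confirmed on-chain")
        amount = min(self.tranche, self.remaining)
        self.outstanding = amount
        self.remaining -= amount
        return amount

    def confirm(self, attested_tbill_value: float) -> None:
        # In production this would verify an oracle-signed attestation.
        if attested_tbill_value < self.outstanding:
            raise RuntimeError("custodian attestation below disbursed amount")
        self.outstanding = 0.0

facility = TranchedFacility(total_usd=200e6, tranche_usd=10e6)
facility.draw()                 # release the first 10M
facility.confirm(10_000_000)    # oracle confirms T-bills at the custodian
facility.draw()                 # only now does the next tranche unlock
```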
Speaker 5: PG
19:24 – 19:42
Hey, Nicholas, just one point here, if I may interrupt. Do you see that among T-bills themselves, with so many issues out in the market, there is a need for standardization of the whole process? Or do you think industry standards have been set?
Speaker 4: Nicholas
19:45 – 20:53
I don’t think there’s been any industry standards set, primarily because I don’t think there’s a cookie-cutter recipe for how to launch an RWA. There’s not even standardization around a jurisdiction or a particular legal structure. They are all quite different, which is a little bit frustrating from an Oracle integration point of view because it means that the integrations we do with customers have to be very specific, bespoke to their particular setup. I would argue we need much more standardization, and standardization would help across the board. It would make RWAs cheaper to deploy, have less overhead, make it easier to create oracles for them, and make on-chain credit delegation protocols much more comfortable as they have a standardized uniform risk framework around how to handle these.
Speaker 5: PG
20:56 – 21:14
Totally agree, well said. I think this will also allow Web2 users and institutions to not get confused with so much happening here. They would be able to come and embrace this whole beautiful technology much more easily. But yeah, that’s just my 2 cents. Go ahead.
Speaker 2: Tyler
21:20 – 21:45
We actually have Synnax joining us now as well. If you would like to briefly introduce yourself from the Synnax account, we were talking about one of our first questions. Maybe we’ll move to our next question, and you can answer this one after you do your intro. One of the things we’re wondering about is, as the volume of real-world assets grows on-chain, we anticipate a lot of challenges to maintaining data accuracy and trust. How are you guys addressing that at Synnax? And please do briefly introduce yourself as well. Welcome, glad to have you.
Speaker 7: Rob
21:46 – 21:47
Yeah, hi guys. Can you hear me?
Speaker 2: Tyler
21:49 – 21:49
Loud and clear.
Speaker 7: Rob
21:50 – 23:14
Great stuff. Hey guys, my name is Rob. I’m the Co-founder and CEO of Synnax. I’m also the Co-founder and former CEO of Clearpool, a DeFi lending protocol. Prior to that, my background is in traditional finance, mainly sales and trading in fixed income markets and tech capital markets. Synnax is a credit intelligence platform. It’s built for Web3 credit markets, but also as a bridge between Web2 and Web3 credit markets. Synnax introduces the concept of predictive credit intelligence.
We do this by providing predictions for the future financial performance of debt issuers. These predictions are derived through a unique decentralized machine learning consensus network, and we are also using secure computation technology to protect data privacy. Ultimately, this enables Synnax to provide credit insights that are unbiased, real-time, and forward-looking, while preserving the privacy of the issuer’s data. So yeah, thanks to Polytrade for inviting me today. Sorry, would you mind repeating the question?
Speaker 2: Tyler
23:15 – 23:40
Yeah, totally fine. And thank you so much for being here. This is going to be very relevant to the topic of RWA data, which is what we’re focusing on here in the Polytrade space. So the question is about the fact that we talked about data needs for RWAs, and we talked about categorizing those, but now we’re wondering about the volume of RWAs on-chain. They’re going to increase, right? And we anticipate challenges in maintaining data accuracy and trust. How would you guys address that problem set?
Speaker 7: Rob
23:42 – 26:46
Yeah, it’s a really good question. I mean, starting off a bit with the previous question, it really depends on what the asset is and if the asset was originated on-chain or off-chain. An example might be a loan that’s been originated on-chain. In this case, a lot of the data for that asset would already exist on-chain, so the additional data needs would mostly relate to the issuer and the use of funds, etc. On the other hand, you have assets that are originated off-chain. So, for this example, we could use a bond or a note that’s been tokenized.
In this case, there are a lot more data needs, relating not just to the issuer but also to the type of security, the prospectus of that security, authorized custodians, lead managers, the legal structure of ownership, and so on. Much of that was probably covered in the previous question. As for this one: yes, the challenges in bringing that data on-chain are going to
be in the consistency and standardization of those different types of data. You have this diversity of assets and regions to which those assets belong, which introduces different reporting standards and data points that are very hard to compare. So, the consistency and standardization of data are going to be a real challenge here. One way to address that, just to give one example, would be perhaps automated transformation into common international financial reporting standards. This could be one area that is currently a challenge already. For example, at Synnax, we parse a lot of data from regulatory sources for public companies.
The source of the data is good, but the challenge is that all of this data is in different formats and isn’t readable by machine learning, which is one of the core concepts of what we do. So transforming that data into a format that is usable by machine learning is a big challenge. And if you had that sort of standardization—automated transformation into common reporting standards—it would make life a lot easier. But having said that, that’s one of the unique selling points that we bring to the table. Yeah, hopefully, that answers the question.
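A toy illustration of the kind of automated transformation Rob describes: mapping issuer-specific line-item labels onto one canonical schema so the data becomes machine-readable. The alias table and field names are assumptions for the sketch, not Synnax’s actual pipeline.

```python
# Normalize heterogeneous filings into one common schema. The alias table
# below is an illustrative assumption, not a real reporting standard.

FIELD_ALIASES = {
    "total_revenue": {"revenue", "total revenue", "turnover", "net sales"},
    "net_income":    {"net income", "profit for the year", "net profit"},
    "total_assets":  {"total assets", "assets, total"},
}

def normalize_filing(raw: dict[str, float]) -> dict[str, float]:
    """Map issuer-specific line-item labels onto a canonical schema."""
    out = {}
    for canonical, aliases in FIELD_ALIASES.items():
        for label, value in raw.items():
            if label.strip().lower() in aliases:
                out[canonical] = value
    return out

filing_a = {"Turnover": 120.0, "Profit for the year": 9.5}
filing_b = {"Net Sales": 88.0, "Net Income": 4.1, "Total Assets": 300.0}
print(normalize_filing(filing_a))  # {'total_revenue': 120.0, 'net_income': 9.5}
print(normalize_filing(filing_b))
```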
Speaker 1: Shreya
26:50 – 27:47
Yeah, thanks for that, Rob. Great insight. Sorry, I was just cut off there; it’s not a Twitter Space if you don’t rug. So, the stakes are high when it comes to data integrity, and the whole system relies on trust, right? Standardization sounds great in theory, but I think the devil is in the details.
So, given the diversity of platforms and assets, can we realistically standardize the format and quality of data across different blockchains? If yes, how can we do so? Chronicle, maybe you want to go first?
Speaker 4: Nicholas
27:50 – 27:52
I’ll give everyone else a chance to chime in first.
Speaker 3: Ray Buckton
27:55 – 29:32
I’m happy to take a brief swing at it from RWA World’s side. Hello, everybody. I’m Ray Buckton, Head of Research at RWA World. We interact with a ton of different tokenization projects, ranging from everything in the collectibles market all the way up to tokenized securities and everything in between. As it pertains to standardization, formats, and quality, I think it’s definitely possible, but only within particular categories. For example, if you were to make a comparison between a tokenized collectible versus a tokenized security, you’d really be making an apples-to-oranges comparison, assuming that the collectible isn’t tokenized in a fund structure or something like that. Ultimately, the format of data is going to be a bit different for collectibles and physical items. We may want to know something about, of course, the fungibility between given collectibles. Because a Pokémon card that’s been ripped, torn, and played with is a lot different than a grade 10 near-mint, untouched card.
Whereas for securities, we may want to have some other formats and considerations pertaining to that data. But ultimately, regarding the second component of your question, quality, I think that’s something we can absolutely strive for. Everyone that we’ve heard from—Synnax, Chronicle, and Credora—is doing phenomenal work to that end because while there may be dissimilarities between RWA subcategories—securities, collectibles, etc.—the quality of data within those categories can definitely aim for the highest quality possible. So, I definitely think it’s possible to get the quality. The format is going to be an apples-to-oranges comparison.
Speaker 4: Nicholas
29:38 – 31:08
Yeah, I would agree. It’s very asset-dependent, right? And you kind of brought the collectibles angle into play. But even among securities, security is such a broad category that I’m not sure we’re going to center on a standard format that encompasses everything. I think, if anything, we’ll have standards within certain subcategories. But again, that standardization is direly needed because the more you make everything bespoke, the more expensive it gets. If we really want RWAs to scale, we need to make them as uniform as possible so you don’t end up having huge overhead costs from trying to do the deployment, from trying to do the integration, from trying to do some kind of risk analysis on them, like if you’re on the lending side. And yeah, I think one aspect of the question was also whether it changes by blockchain. No, I would say what blockchain you use has very little impact on this.
Speaker 7: Rob
31:11 – 32:22
Yeah, for what it’s worth, I would agree with Ray. It’s great to hear Ray giving that perspective for different types of assets. Completely agree with that. And also, on the blockchain or different blockchains, I don’t think it makes a big difference. Again, from a more traditional financial asset perspective, if you’re tokenizing something that exists in the real world, there’s a lot more data that needs to be considered. That changes a lot when you originate the asset on-chain. We’ve already seen some financial institutions issue bonds, for example, on-chain.
And I love that the information that is usually stored off-chain is now already on-chain, making the whole process much easier. And this challenge of standardization will go away. But when it comes to tokenizing assets that have been originated off-chain, as Nicholas said, it’s going to be a massive challenge to standardize all of that. I’m not sure we’ll ever see that.
Speaker 4: Nicholas
32:25 – 35:44
Yeah, I mean, to give a concrete example of this, we’re working with M0. For those who aren’t familiar, M0 is like if you took MakerDAO and Tether and went halfway in between both of those. With Tether, it’s a centralized issuer, but the collateral is off-chain. With Maker, it’s a decentralized issuer, and the collateral is on-chain. With M0, it’s still a decentralized issuer, but the collateral is held off-chain. So, in that sense, it’s very similar to an RWA in that you have issuers essentially locking up Treasury bills at some custodian, typically a bank. Then they have Chronicle validate what the custodian has in custody, and use that Chronicle-validated signature to go on-chain and mint M0 stablecoins against that attested value in custody. While I think this is extremely cool, you essentially don’t want to have one custodian holding all of this stuff, right?
You’re going to want a diversity of custodians, a diversity of jurisdictional exposure as well. I can tell you that having dealt with a few of these banks, there is very little uniformity on their back end. In the software world, we’re used to dealing with an API that you can query to get information. Most banks don’t have an API. You’re literally sometimes scraping a web page or something like that to get some information.
And you don’t even necessarily have the guarantees that information is the most up-to-date value. That information could be present on four or five different pages. Maybe 99% of the time, they have the same exact value. But maybe some of the time, one of the pages updates a little earlier than all the other ones. Or maybe another time, that one page is the one that updates later. So now they’re out of sync. Even something simple like what is the canonical value, what is the truth—their systems don’t even have that consistency.
So we’re trying to bring consistency to something that is not even consistent at the root, and then trying to scale this across multiple banks and doing this integration multiple times. You can quickly see how this problem is incredibly difficult to scale and standardize.
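To make the “canonical value” problem concrete, here is a small sketch that accepts a scraped balance only when a quorum of sources agree, treating disagreement as a signal to retry. The quorum rule is an illustrative assumption, not Chronicle’s actual validation logic.

```python
# Resolve a canonical balance from several bank pages that may be briefly
# out of sync: accept a value only when enough sources agree.

from collections import Counter
from typing import Optional

def canonical_value(readings: list[float], quorum: int = 3) -> Optional[float]:
    """Return the value reported by at least `quorum` sources, else None."""
    if not readings:
        return None
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= quorum else None

# Four of five pages agree; the fifth updated earlier (or later) than the rest.
print(canonical_value([100.0, 100.0, 100.0, 100.0, 101.5]))    # 100.0
print(canonical_value([100.0, 101.5, 99.0, 100.0], quorum=3))  # None: retry later
```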
Speaker 2: Tyler
35:45 – 38:54
Yeah, I wanted to add to this really quickly, because in my prior line of work, before I started RWA World, I spent three years at a fintech infrastructure bank. I was working at Cross River Bank, and they were innovative in that they actually had a real-time bank core. When I joined the bank, I didn’t understand why that was so revolutionary, because I thought, well, tech companies everywhere have APIs, right? Why is this super innovative? But I found out that most banks are still using mainframes, coding in COBOL, and running really old-fashioned, outdated technology. Just like you said, Nicholas, they’re so far behind.
Now, there are not a ton of fintech infrastructure banks, because the banking industry is trying to scare them away from crypto, unfortunately. Even Cross River couldn’t accomplish everything it wanted to, because there was a lot of pressure from the FDIC and other regulators not to work with the industry. So, even if you do find one of these banks with a real-time bank core, there are very few of them. There are only a few tech infrastructure banks that are generally crypto-friendly, like Cross River was, and most banks are incapable of giving you real-time notifications with webhooks the way Cross River could.
It was such an interesting learning experience for me, because I came into that industry and job thinking, okay, cool, this is a pretty standard offering. And I found out, oh my gosh, wait, this is one of the only banks that offers this. It was super mind-blowing, because we’re so far down the crypto rabbit hole that most of the folks on this call tend to forget how far behind the banking industry actually is. Just like Nicholas was pointing out, the team at Chronicle is having to figure out, okay, we’re literally having to scrape websites for certain data because these guys can’t make an API. It tells you just how far we
have to go still. So, maybe we can evolve this to our next question because this is extremely valuable intelligence that everybody here has been bringing to us in the space.
You know, we’ve talked a little bit about pulling this data from a lot of different sources. We’ve talked about the lack of a standard. One thing that we haven’t really touched on in too much depth is this idea of privacy versus transparency. Blockchains are open and permissionless, while off-chain systems are not. So, you have this option in terms of what data you’re actually bringing on-chain. I imagine that privacy and transparency have some consideration in that decision. Let’s open this up to anyone on the panel. What are some of your considerations when balancing the need for privacy and also wanting to be transparent?
Because obviously, as people are building smart contracts and apps on top of these data feeds, these oracles, these indexes, any material that we’re pulling off-chain and putting on-chain, there’s obviously a need for certain information that is discrete, binary, and executable in a deterministic fashion. But then there’s all this other squishy data, which maybe you don’t put on-chain. What do you guys make of that?
Speaker 7: Rob
38:56 – 42:44
Yeah, I can take this one from a credit perspective to begin with. These days, most debt issuers, most companies, really don’t want to share data at all, especially up-to-date data; it’s a huge concern for them. It is a problem, because at the same time these companies want a credit rating and access to credit. It’s great having the technology to provide forward-looking insights and predictions on credit metrics, but you need the data first. You have to get issuers comfortable with sharing data, and unfortunately they’re not going to do that in plaintext. So, we have to look at ways to mitigate that.
A couple of ways are through certain types of encryption. Homomorphic encryption, for example, would allow the issuer to encrypt their financial statements, and then the encrypted version of those statements can be shared with a decentralized network. The data would be encrypted. It’s not in plaintext, so the only entity that would be able to see the plaintext is the issuer themselves. But say the decentralized network of machine learning models built by data scientists—the data scientists can encrypt their algorithm using the public encryption key of the issuer.
That means the algorithm would then be able to perform computations on the data as if it were plaintext. The output from that is also encrypted and can then be decrypted by the issuer holding the private encryption key. Then we can see the result of that. That kind of explains a little bit about what Synnax does. This means the issuer can now be confident in sharing their data because nobody can actually read it, and we get meaningful outputs that, once decrypted, can be aggregated and form these predictions I mentioned earlier. This is one way.
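As a toy demonstration of that workflow, the sketch below uses a Paillier-style additively homomorphic scheme: a third party combines two encrypted figures without ever seeing plaintext, and only the key holder can decrypt the result. The primes are demo-sized and insecure, and Synnax’s actual scheme and parameters are not specified here.

```python
import random
from math import gcd

# Demo-sized primes: insecure, for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    """Issuer-side encryption with the public key (n, g)."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Only the private-key holder (the issuer) can do this."""
    return (L(pow(c, lam, n2)) * mu) % n

# A third party adds two encrypted figures without seeing plaintext:
# E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 12345, 11111
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
```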
Another way is by using zero-knowledge proofs. So, even though the data is coming directly from the issuer, so there’s no issue with provenance, there’s still the problem that the issuer could be lying or committing fraud. Zero-knowledge proofs would allow them to prove their assets or provide proof of equity, if you like. They can therefore attest that the information they provided in their financial statements is real and true. The other thing is just by using machine learning. It’s very astute at identifying trends and patterns and anomalies in data. If you have a time series of data, and then this company makes a false statement, it can get picked up by the models. This is perhaps something that a human analyst would miss. So, technology is there to solve this problem, and we’re really excited to be bringing that to the space.
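A minimal sketch of the kind of time-series anomaly check Rob alludes to: flag a reported figure that deviates sharply from the issuer’s own history. The 3-sigma threshold and the simple z-score are illustrative assumptions; real models would be far richer.

```python
# Flag reported figures that deviate sharply from an issuer's own history.

from statistics import mean, stdev

def is_anomalous(history: list[float], reported: float,
                 z_max: float = 3.0) -> bool:
    """True if `reported` sits more than z_max std devs from the history mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reported != mu
    return abs(reported - mu) / sigma > z_max

quarterly_revenue = [10.2, 10.8, 11.1, 10.9, 11.4, 11.0]
print(is_anomalous(quarterly_revenue, 11.6))  # False: plausible growth
print(is_anomalous(quarterly_revenue, 25.0))  # True: warrants scrutiny
```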
Speaker 2: Tyler
42:47 – 42:48
It’s so vital.
Speaker 6: Matt
42:48 – 46:08
Yeah, I can jump on that and talk about it from our perspective. I think most of the data sensitivity issues we run into are in the context of real-time data. So, you know, a direct feed, for example, from someone’s exchange account or bank account, right? The inflows and outflows from their bank account. I think that’s sensitive for a good reason. Generally, the approach we take is to run a specific set of computations on that underlying data, so that we only ever access an aggregation.
So, that’s our approach. On the real-time side, I think people are relatively willing, especially if they’re trying to borrow, to share financial information as long as you have appropriate processes and controls around the management of that information. And when that information is shared, we typically ask, what is the source of that information and how do we encompass that in the credit assessment itself? Is it prepared by a third party? Is it audited? Is it a management account?
When it is a management account with more timely data, does that company also have a history of audits? In other words, what reliance can we put on the accuracy of that underlying data? Is there a confidence interval we can apply to it?
As for what goes on-chain: we don’t generally think about putting this raw underlying information on-chain. We think many legitimate capital allocators, serious funds or investors, care about that underlying information; even in traditional markets, they won’t make a decision solely based on a credit assessment. So, in the on-chain world, the credit assessment can be used at the smart contract level, as I said, to make decisions. It can effectively say, this is an appropriate interest rate, or this is an appropriate collateral requirement, for the tranche of risk this borrower falls into. But if a capital allocator is really evaluating that opportunity, they may justify accessing the underlying information, and we do that off-chain. Mostly what we care about there is extracting the data and normalizing it efficiently. Supplementary to that, we provide traceability: there is some source document and a citation of that source document. We think that combination is good. It allows someone to go through a totally automated portfolio construction process, while giving larger capital allocators who need to do their due diligence direct access to some of that information off-chain. They can still leverage the efficiency of on-chain debt issuance platforms and the tokenization of the underlying asset.
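A small sketch of the aggregate-plus-traceability pattern Matt describes: compute only a fixed aggregate over sensitive raw flows, and attach a citation and a hash of the source document. All names and fields are illustrative assumptions, not Credora’s implementation.

```python
# Expose only an aggregate of sensitive flows, with a traceability record.

from dataclasses import dataclass
import hashlib

@dataclass
class AggregateMetric:
    name: str
    value: float
    source_citation: str   # e.g. a document ID
    source_digest: str     # hash of the source document for tamper-evidence

def net_flow_30d(raw_flows: list[float], source_doc: bytes,
                 citation: str) -> AggregateMetric:
    """Publish a 30-day net flow, never the individual transactions."""
    return AggregateMetric(
        name="net_flow_30d",
        value=sum(raw_flows),
        source_citation=citation,
        source_digest=hashlib.sha256(source_doc).hexdigest(),
    )

metric = net_flow_30d([+1_000.0, -250.0, +4_300.0],
                      source_doc=b"...bank statement bytes...",
                      citation="statement-2024-05")
print(metric.name, metric.value)   # downstream consumers see only this
```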
Speaker 2: Tyler
46:09 – 47:04
Well said. I think as we’re looking towards the future now, I want to ask about these emerging technologies and methodologies that we’ve discussed here. Real quick though, if everyone’s hearing my voice, I want you to follow everybody here in the space. They’ve taken a lot of time out of their day to share some really valuable insights with us. Be sure to also follow the Polytrade account as well. Here in our last 10 minutes, I want to ask the panel, what are some emerging technologies that could further enhance the trustworthiness of this RWA data on-chain? We’ve talked about proof of reserve, we’ve talked about authentication, but what can we do to future-proof this data? Because we’re in an evolving space. Standards are being adopted and created.
And there are probably going to be competing standards and things like this. But what are the things we can do as an industry to collaborate together to try and future-proof and enhance the trustworthiness of RWA data that’s moving on-chain? Anyone can take that.
Speaker 4: Nicholas
47:08 – 49:02
Well, I think a lot of it starts by going to the actual source of the data. What you’ve typically seen called an RWA Oracle has usually been data obtained by the issuer. I think there is a substantial moral hazard at play if the issuer of an RWA is the same one providing you the data that you, as a credit delegator, are using to evaluate the credit profile of the RWA. It means choosing the more difficult path of going to those custodians, going to those banking systems that are held together by duct tape and bubblegum, and acknowledging that it’s going to take longer and be more expensive. You may actually have to connect to a mainframe and write some COBOL or Fortran. But I think what we really want to avoid is some kind of FTX-like situation. So, it behooves us as an industry to almost self-regulate. I think a market-based approach is pretty functional there, but we need to police ourselves lest the regulators come in after we’ve screwed things up and over-regulate, which would then be burdensome for the scaling of RWAs.
Speaker 2: Tyler
49:06 – 50:26
Oh man, I have to jump in here. I love that answer. I completely agree about the moral hazard of relying only on data from tokenizers. Just to briefly touch on what RWA World is doing: at the moment, we’re focusing on tokenized luxury watches. We’re working with tokenizers to get their data sources into our API, but we’re also going a step beyond and taking in off-chain data sources; we’ll be working with authenticators and custodians to bring in their data as well. The idea is to have multiple data sources that you can bring into one location, build some sort of parameter-based assessment of that information, and present it in a way that is executable, so you can actually build apps and other things on top of it. But you cannot rely on information from the tokenizer alone, because, like you said, you get the potential for an FTX-like situation where someone can just say: look, we have this asset, it’s tokenized, and it’s worth X. Just trust me, bro. That is something I anticipate being a major headline in the future. Unfortunately, it’s something people who do not use strong Oracle sources and don’t use
proper data could fall victim to, because you can essentially meme your way into a market cap if it’s not backed by anything fundamental, but people think it is. So, I just want to echo that point you made.
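A toy version of the multi-source idea Tyler describes: treat an asset claim as verified only when independent source categories, not just the tokenizer, attest to it. The source categories and all names are illustrative assumptions.

```python
# Require attestations from independent source categories before treating
# an asset claim as verified. Categories are illustrative.

REQUIRED_SOURCES = {"tokenizer", "custodian", "authenticator"}

def asset_verified(attestations: dict[str, bool]) -> bool:
    """True only if every required, independent source attests to the asset."""
    return all(attestations.get(src, False) for src in REQUIRED_SOURCES)

print(asset_verified({"tokenizer": True}))            # False: trust-me-bro
print(asset_verified({"tokenizer": True,
                      "custodian": True,
                      "authenticator": True}))        # True
```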
Speaker 7: Rob
50:28 – 51:21
Yeah, I would probably agree with all of that. It comes back to the same problem of centralization: data coming from an issuer, and a centralized institution analyzing that data. There’s so much potential for bias and manipulation within that model. So, continuing to bring more decentralization into the process will mitigate some of these issues and provide more unbiased and trustworthy outputs. A combination of the technologies I mentioned earlier and more decentralization in the process are the things we’ll see going forward that will improve the market.
Speaker 2: Tyler
51:25 – 51:30
Awesome. Nicholas, did you get a chance to answer this if you wanted to jump in as well?
Speaker 4: Nicholas
51:31 – 51:33
Yes, I spoke earlier.
Speaker 2: Tyler
51:34 – 51:37
Oh, sorry, I meant to say Matt. Matt, I apologize.
Speaker 6: Matt
51:38 – 53:02
Yeah, I agree with the points on establishing high standards as early members of the space. I think that’s really important. Even in the context of placing this data on-chain, it’s important to articulate the process through which that data makes its way on-chain and what the ultimate source of that information is. I think that can be modeled and made evident to the consumer. The other thing I’d say on standards specifically: standards come from self-regulatory bodies, or in certain cases, regulation. Where there are more heavily regulated industries off-chain, you’ll see standards form more quickly in their on-chain counterparts. In terms of emerging technologies, we look at a lot of different proof schemes that we think are valuable in this context. We also look closely at traceability and different database technologies, which we think can let us more effectively demonstrate traceability in end-to-end calculations, whether in the normalization of underlying data or in the application of a credit methodology. That’s it.
Speaker 2: Tyler
53:03 – 53:31
Well, I think it’s really exciting to hear all these different industry participants speaking openly about their challenges and opportunities in the space, as well as sharing collaboration ideas. That is the type of collaboration that Polytrade is trying to encourage and also what we’re all about at RWA World. I wanted to thank our guests for joining, especially Polytrade for allowing RWA World to come and co-host. PG or anyone else from the Polytrade team, if you want to jump in and offer some final closing thoughts, this was one of my favorite discussions we’ve had this week.
Speaker 5: PG
53:35 – 54:28
I couldn’t agree more. This was one of the best discussions. We all agree on some points, like the standardization of data across RWA verticals, and I love what Nicholas from Chronicle mentioned: each vertical probably has further subcategories that need standardization. These are some interesting points, and RWA World is doing a fantastic job cataloguing them and creating a brilliant index for all of us to follow. I think more will come on the tooling side: what can people build with this data, and what different systems can they build? Polytrade will keep asking these tough questions and bringing them to the audience as we go forward. Thanks again, everyone, for joining us. Any closing remarks from the speakers are more than welcome.
Thank you.
Speaker 7: Rob
54:31 – 54:41
No, thanks a lot, guys, once again, for having me. It was a great discussion and really good to meet you all. Hopefully, I’ll be back again soon.
Speaker 1: Shreya
54:43 – 55:18
Absolutely. A special thanks to RWA World for co-hosting this discussion; we definitely appreciate it. For everybody listening, we have a Spaces code for today’s session on our portal at portal.polytrade.app. The code is RWA World, dedicated to today’s discussion. Thanks a lot, everybody. We’ll see you again this Friday.