Whitepaper: Decentralized Encryption

(Disclaimer: This paper describes an old idea of mine and contains some outdated notions about the mechanisms of modern encryption. Some of the ideas presented are outright infeasible, yet the idea is still fun to ponder.)

The goal of encryption is for one user to send a message to another user without anyone else being able to understand the message.

The main way we do encryption today is the public-key method, which involves two keys: a public key and a private key.

The user who is sending something has the public key, and the user who is receiving the data has the private key.

The first user takes the message he wishes to send and runs it through the public-key cipher, which transforms it into an encrypted message. The important thing to mention is that this cipher is a special kind of function: at least for classical binary computers, it is effectively one-directional. This is what allows encryption to work. Given a newly encrypted message, a 'sniffer' (anyone who intercepts it) cannot realistically use the public key to decrypt it; that would require far too much computing power and time.

For example, it is simple to multiply two extremely large prime numbers together, but given only their product, current computers cannot find the prime factors in any realistic amount of time. This is why the operation is considered one-directional: you can plug in an input and get an output, but you cannot feasibly solve for the input given the output.

Now that we have this encrypted message, we can send it however we want to the receiving user without anyone being able to read it. When the receiving user gets the message, he uses his private key, which in our example contains the secret prime factors, to easily decrypt it.

For a real-life analogy, imagine a lock on a door. One key, the public key, can only turn the lock counterclockwise, locking the door. Another key, the private key, can only turn the lock clockwise, unlocking it. So you lock the door using the public key and unlock it using the private key; you need both keys to make full use of the lock.
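The prime-number example above can be sketched as a toy keypair. This is a miniature, deliberately insecure illustration using classic small textbook primes; real systems use primes hundreds of digits long:

```python
# Toy illustration of the one-directional idea: multiplying two primes is
# easy, but recovering them from the product alone (factoring) is what makes
# the operation hard to reverse. Tiny textbook primes, for demonstration only.

p, q = 61, 53                 # secret primes (real keys use enormous primes)
n = p * q                     # public modulus: 3233, easy to compute
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # requires knowing p and q
d = pow(e, -1, phi)           # private exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public (e, n)
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(plaintext)                   # prints 65, the original message
```

Here `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse; an eavesdropper who knows only `n` and `e` would have to factor `n` to find it.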


This method underlies most of the encryption we do today, and it is really effective, but there is a problem. The reason we cannot decrypt using the public key is that there is not enough computing power and time to brute force the decryption, but there is a new method of computing being actively researched called quantum computing. I will not go into details about how quantum computing works, but we know that it gives us the power to reverse these operations: Shor's algorithm, for example, lets a quantum computer find the prime factors of a large number efficiently. In effect, it shatters the seemingly 'one-directional' operations we use to cipher things. In our analogy, it takes the public key and pushes in the clockwise direction so hard that the lock gives way and the door opens. Before quantum computing becomes viable, we need alternative ways of encrypting things that cannot be broken this way. This either involves creating better pseudo-one-directional operations, or perhaps entirely new approaches. The method I propose will not solve this major problem completely, but it can perhaps allow for encryption in smaller networks.


First of all, this method throws out the public and private keys entirely. It also requires a perfectly (or nearly perfectly) decentralized network, in which every node, or user, on the network can see the data being transferred around. Nearly every user is involved whenever data is transferred between two users. We do not want every user to have all the information (in fact, that would go against what we are doing), but we do want every user, or node, to be involved in the encryption system. This is because if the system can successfully keep data private when everyone is involved, it is truly secure against any other attack.

Before we go further, we need to understand neural networks as well. In principle, they are quite simple: they are a way to take one or more inputs and produce one or more outputs. In this process the inputs pass through many layers. Each layer contains many individual nodes (points in the network that manipulate the information going through them). The nodes in a given layer are connected, often in an extremely complex manner, to many, if not all, of the nodes in the layers before and after it (in a linear neural network). The inputs go through the nodes of one layer and are passed along to the nodes of the next. This organization of layers and nodes can be arranged in many ways: linearly, non-linearly, or even multi-dimensionally. The nodes are the key to a neural network. Each node is some operation: it takes the input from a previous node, or nodes, and outputs to another node, or nodes. Each node is a function among hundreds of others, all working together to process the inputs. Neural networks are often used for machine learning.
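The layers-and-nodes picture above can be sketched in a few lines. This is a minimal illustrative forward pass; the weights are arbitrary made-up numbers, not a trained network:

```python
import numpy as np

# A minimal two-layer feed-forward pass: each layer multiplies its input by
# a weight matrix (the "connections") and applies a nonlinearity (the
# per-node operation). All numbers are arbitrary illustrative values.

def relu(x):
    return np.maximum(0, x)

x = np.array([1.0, -0.5])            # input vector
W1 = np.array([[0.2, -0.3],          # weights into the hidden layer
               [0.4,  0.1]])
W2 = np.array([[0.5, -0.2]])         # weights into the output layer

hidden = relu(W1 @ x)                # hidden layer: two nodes
output = W2 @ hidden                 # output layer: one node
print(output)
```

Each `@` stands for the connections between two layers, and `relu` plays the role of the operation inside a node.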

Let’s say we have some data where we know both the inputs and the expected outputs. We can put our inputs into some given neural network and see what outputs we get. Based on how closely the outputs match the expected outputs (the outputs of our training data), we methodically alter the nodes in our neural network until we eventually get a network that reliably gives the correct output for a given input. This is valuable because once we have a ‘trained’ neural network, we can feed it inputs for which we do not know the expected outputs. Since our network was right for all the training data, we can trust that the output it gives is also right, to some degree of accuracy, for the new inputs. We have created and trained a system that can now be applied to solving problems it has never encountered before.
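The training process described above can be sketched with the smallest possible "network": a single weight, nudged toward data it should fit. This is an illustrative sketch of gradient descent, not any particular library's training loop:

```python
import numpy as np

# Minimal sketch of training: repeatedly nudge a weight so the network's
# outputs move toward the expected outputs (gradient descent on squared
# error). The "network" here is one weight, w, and the true rule is y = 2x.

inputs  = np.array([1.0, 2.0, 3.0, 4.0])
targets = 2.0 * inputs        # training data: inputs with known answers

w = 0.0                       # untrained starting weight
lr = 0.01                     # learning rate
for _ in range(500):
    pred = w * inputs
    grad = 2 * np.mean((pred - targets) * inputs)  # d(error)/dw
    w -= lr * grad

print(w)                      # converges close to 2.0
print(w * 10.0)               # generalizes to an input it never saw: ~20.0
```

The last line is the payoff described above: the trained system gives a trustworthy answer for an input that was not in the training data.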

This is great, but it comes with many problems. When we use ordinary algorithms to solve problems, we know how the algorithm took the inputs and churned out the outputs. We understand every step of the algorithm since we made it, but this is not true with neural networks. The layers in the neural network were changed so much (through other algorithms designed to ‘fix’ the network), and so pseudo-randomly, during training that we call them hidden layers. We call them this because they are hidden from our ability to analyze (for the most part). We can look at any node and see what it does, but we cannot easily understand what each individual node contributes to the larger picture, and it is even harder to understand how the entire system works together. Each node is part of a massive system whose inner workings are almost impossible to understand; all we know is that it does work, with a given precision and accuracy. This ‘randomness’ is usually a downside, because we cannot know how a given neural network works, but we can actually turn it to our advantage. Eventually we may reach a point where we can understand an entire neural network perfectly, but as long as we cannot, due to the intrinsic nature of the system, we could use this complexity to our advantage.

Let us try to create a neural network that takes an input and then outputs the same exact thing. It seems pointless, but if done correctly, this can be the first step toward an encryption method that cannot be brute forced. Let us say the input is our sending user and the output is our receiving user. When we send our input through the neural network, it starts changing radically, looking nothing like the input. If you were to look at what the neural network has done to the input in the middle layer, what you would see would make no sense. After it gets past this critical point of “maximized jumbledness”, where it is completely different from the input, it starts to revert back toward the original input, and at the end you have the same input.

What this means is that only the first couple of layers, and the last couple of layers, contain any useful information that resembles the original message (it is even possible that the first layer changes the input so radically that we have many points of ‘maximized jumbledness’). Suppose you got the data from the middle layer and wanted to put it back through the neural network to recover the input, to decrypt this meaningless information. You would have to have complete knowledge of the operation each node performs, which layer every node is in, and how each node is connected to every other node. That is a lot of information. In fact, if you took the message from the point of maximized jumbledness, you would need perfect knowledge of at least half the neural network to decrypt it. Without this information there is no way to decrypt the data; it cannot be done through brute force, because you have no key, cipher, guide, or anything else to tell you where to start.
It is not a one-directional operation but a maze of operations that you would have to know how to navigate if you were dropped randomly inside. The only people who can fully know the system are the senders and receivers (more or less: they do not have perfect knowledge unless they look, but they know where it starts and where it ends, so they could map out the entire network if they wished); anyone in the middle can only know what is connected to them.
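A hand-built sketch of such an identity network, under the simplifying assumption that each layer is an invertible matrix. A real version would be a trained autoencoder; the inverses here are constructed by hand just to show the shape of the idea:

```python
import numpy as np

# Hand-built "identity network": the first half scrambles the input with
# random invertible matrices, and the second half applies their inverses.
# The middle activation is the point of "maximized jumbledness" -- it is
# meaningless unless you know the weights of the later layers.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))      # scrambling layer 1
W2 = rng.normal(size=(4, 4))      # scrambling layer 2

message = np.array([1.0, 2.0, 3.0, 4.0])

middle = W2 @ (W1 @ message)      # what a sniffer in the middle would see
recovered = np.linalg.inv(W1) @ (np.linalg.inv(W2) @ middle)

print(middle)       # looks nothing like the message
print(recovered)    # the original message again (up to rounding)
```

Without `W1` and `W2`, the `middle` vector by itself tells an interceptor nothing about `message`.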

There are many ways to utilize this neural network. It can be set up between two computers, or, with some difficulty, implemented across an entire system. One way to use it, although the idea is unfinished and needs a lot of refining, is as follows:

Taking a decentralized network, and two nodes that want to send each other information securely, we assign every other node in the network a layer of the same-input-output neural network (we would want it to be multi-dimensional). We can then send information through the network using this spread-out neural network. The sender and receiver would get information that makes sense, but everyone else would see meaningless data. If every user in the network teamed up, they could make sense of the information; otherwise the information would be secure.
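A sketch of this distributed version, under the same simplifying assumption that each participant's layer is an invertible matrix. The participant count and matrix sizes are arbitrary; the point is only that data passed along the chain looks meaningless to any single intermediate node:

```python
import numpy as np

# Sketch of the distributed idea: each participant in the network holds one
# layer (modeled here as one invertible matrix) and applies it to data that
# passes through. No single intermediate participant can recover the message;
# only someone who knows every layer can undo the whole chain.

rng = np.random.default_rng(1)
layers = [rng.normal(size=(3, 3)) for _ in range(5)]  # one layer per participant

message = np.array([7.0, 8.0, 9.0])

data = message
for W in layers:                  # each hop applies only its own layer
    data = W @ data               # intermediate values look like noise

recovered = data                  # the receiving side undoes every layer
for W in reversed(layers):
    recovered = np.linalg.inv(W) @ recovered

print(recovered)    # the original message again (up to rounding)
```

Each participant only ever sees its own layer and the scrambled data passing through it, matching the "anyone in the middle can only know what is connected to them" property described earlier.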


There are a lot of other details that need to be discussed, but as a starting point for discussion, I believe we have come to an adequate stopping point. If you have any questions, comments, or want to tell me why what I have written makes no sense, please contact me (my email is in the ‘about’ section of the website).

Thanks!
