This post will describe the main steps and operations that compose the cryptographic protocol of a re-encryption mixnet based voting system we are currently prototyping. This prototype is based around work^{[1][2]} by the E-Voting Group at the Bern University of Applied Sciences, and uses their unicrypt library. The main elements of the cryptographic scheme are

- Public, tamper resistant, access controlled bulletin board
- ElGamal cryptosystem^{[6]}
- Distributed public key generation
- Election public key vote encryption with proof of plaintext knowledge
- Authenticated vote casting (optionally with ciphertext signing) with cast-as-intended verifiability (Benaloh cast-or-cancel)
- Re-encryption mixnet using the Terelius-Wikström proof of shuffle^{[4][5]}
- Joint decryption via partial decryption of ciphertexts
- Individual verifiability via hash checking on the bulletin board (recorded as cast)
- Universal verifiability via execution of a verifier against the bulletin board (counted as recorded)

These have been listed to roughly correspond chronologically with the protocol phases. Here they are at a glance.

This example shows two authorities both for key generation/joint decryption and mixing, but the protocol generalizes to any number of authorities. Also note that although the key generation/decryption authorities above are the same as the mixing authorities, this need not be the case: one could have four authorities such that two are key custodians and two are mixers. It is standard practice, however, for the number of authorities of each type to be the same; there would be no privacy gain in having more of one type, as the limiting factor would be the smaller number.

#### Key generation

In the first step the key generation/decryption authorities jointly create the election public key. This is the key with which voters will encrypt their votes before casting them. As can be seen in the diagram, this process occurs in parallel at each authority. Furthermore, the simple distributed key generation scheme does not require communication between the authorities (as would be the case, for example, with a threshold scheme such as Joint-Feldman or Pedersen). Each authority creates its share of the key and posts the public fragment, along with proofs of correctness, to the bulletin board. The bulletin board then checks the proofs and combines the shares, resulting in the public key, which is also posted. The purpose of a distributed key generation scheme is to distribute trust such that the privacy of the vote is safe as long as just one of the authorities is not corrupted. Note that the corresponding private key exists only conceptually, as the combination of private information held at each authority; it is never actually created.
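As an illustrative sketch, here is the share-and-combine logic with toy group parameters (tiny primes chosen for readability; this is not the unicrypt/Java implementation, and the proofs of correctness are omitted):

```python
import random

# Toy group parameters: p = 2q + 1 is a safe prime and g generates the
# order-q subgroup of quadratic residues mod p (illustrative sizes only).
p, q, g = 23, 11, 4

# Each authority independently picks a private share x_i and
# publishes the public fragment h_i = g^x_i mod p.
shares = [random.randrange(1, q) for _ in range(2)]   # private, one per authority
fragments = [pow(g, x, p) for x in shares]            # posted to the bulletin board

# The bulletin board combines the fragments into the election public key.
h = 1
for h_i in fragments:
    h = (h * h_i) % p

# The combined key equals g^(x_1 + x_2), but no party ever holds that sum.
assert h == pow(g, sum(shares) % q, p)
```

With this combination rule, decryption later requires a contribution from every share, which is exactly why all authorities must participate in the joint decryption phase.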

#### Voting

Once the public key is generated and publicly available at the bulletin board, the election can begin. When casting votes, voters' clients, in this case the voting booth hosted by the browser, download the public key from the bulletin board. Once the voter has made their selection, the voting client encrypts the ballot and produces a hash. Before casting, voters are presented with the option to audit their ballots according to Benaloh's cast-or-cancel procedure, which provides cast-as-intended verifiability. Finally, the ballot is cast and sent to the bulletin board. The bulletin board verifies the voter's eligibility in the election as well as the proofs of plaintext knowledge, and then posts the vote. The voter records the hash corresponding to their vote, which lets them later verify that their ballot was actually stored and counted correctly.
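A minimal sketch of the client-side encryption and hash receipt, using the same toy group as before (the proofs of plaintext knowledge and the real ballot encoding are omitted, and the example public key value is assumed):

```python
import hashlib
import random

# Toy group from the key generation sketch; h is an example election
# public key (here h = g^8 mod p, so the combined private exponent is 8).
p, q, g = 23, 11, 4
h = 9

# The vote, encoded as an element of the order-q subgroup (illustrative).
m = 3

# ElGamal encryption under the election public key:
# ciphertext = (g^r, m * h^r) with fresh randomness r.
r = random.randrange(1, q)
alpha, beta = pow(g, r, p), (m * pow(h, r, p)) % p

# Hash of the ciphertext; the voter keeps this receipt to later find
# their ballot on the bulletin board (recorded-as-cast check).
receipt = hashlib.sha256(f"{alpha}:{beta}".encode()).hexdigest()
```

For a Benaloh audit, the client would instead reveal `r` so that anyone can re-run the encryption of the claimed plaintext and compare it against `(alpha, beta)`; the audited ciphertext is then discarded and never cast.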

#### Mixing

When the election period is over and all the votes have been recorded, the mixing phase begins. The purpose of this phase is to anonymize the votes such that it is impossible to trace which ciphertext belongs to which voter. This is necessary to protect privacy, as the joint decryption phase will reveal vote contents to allow tallying. Just as trust is distributed across several authorities in the key generation phase, so too is the mixing. Each mixing authority permutes and re-encrypts the votes, handing them as input to the next authority. Only if all of the mixing authorities are corrupt is it possible to establish the correspondence between the cast votes on the bulletin board and the output of the mixing phase, which will be decrypted.

The prototype uses the Terelius-Wikström proof of shuffle, which is composed of an offline and an online phase. Although the offline phase (permutation) can be precomputed prior to the start of the election, our prototype does not exploit this feature, opting for simplicity. What is exploited is the parallelism made possible by the fact that the offline phase depends only on the vote count: all authorities can carry out this phase simultaneously once the election period ends. This is in contrast to the online shuffle phase, where each authority must wait for the mixing results of the previous authority. The diagram above reflects this; we can see the computation bars overlap in time for the permutation but not for the shuffle. Each authority submits its mixing results along with the proofs to the bulletin board. The bulletin board verifies the shuffle and posts the mixed votes for the next authority to mix. Once all the authorities have completed the mix and the bulletin board has verified all the proofs, the mixing phase is over.
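The core permute-and-re-encrypt step each mixing authority performs can be sketched as follows (toy parameters and example ciphertexts; the Terelius-Wikström proofs themselves are far more involved and are omitted here):

```python
import random

# Toy group and example election public key h = g^8 mod p (illustrative).
p, q, g, h = 23, 11, 4, 9

def reencrypt(ct, r):
    """Multiply in a fresh encryption of 1; the plaintext is unchanged."""
    a, b = ct
    return (a * pow(g, r, p)) % p, (b * pow(h, r, p)) % p

# Two example ciphertexts from the bulletin board
# (ElGamal encryptions of the plaintexts 3 and 2 under h).
cts = [(16, 13), (12, 16)]

# The mix: apply a secret permutation, then re-encrypt every
# ciphertext with fresh randomness. The output set decrypts to the
# same plaintexts, but positions no longer link to voters.
perm = list(range(len(cts)))
random.shuffle(perm)
mixed = [reencrypt(cts[i], random.randrange(1, q)) for i in perm]
```

Because re-encryption multiplies each component by an encryption of 1, the mixed ciphertexts look unrelated to the inputs, yet decrypt to the same multiset of votes; the proof of shuffle is what forces a mixer to actually preserve that multiset.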

#### Decrypting

Having completed the mixing phase, the bulletin board contains the set of anonymized votes for the election. Because they are anonymized, these votes can now be decrypted without compromising privacy. Just as the key generation phase was distributed across several authorities, these same authorities must intervene to decrypt the votes. Since the scheme is distributed but not a threshold system, all the authorities must participate in joint decryption. As in key generation, as long as one authority remains honest it is not possible to decrypt non-anonymized votes. To carry out the joint decryption, each authority downloads the mixed votes from the bulletin board and calculates its partial decryptions using its private share of the key, along with corresponding proofs of correctness. As can be seen above, this process is parallel and occurs simultaneously at all authorities once the mixing phase is finished. The partial decryptions and proofs are then posted to the bulletin board, which verifies the proofs. Once all partial decryptions are available, the bulletin board combines them and obtains the plaintexts. As noted previously, the private key is never reconstructed; only the partial decryptions are combined.
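Partial decryption and combination can be sketched as follows (toy parameters and example key shares; the proofs of correct partial decryption are omitted):

```python
# Toy group from the earlier sketches; x1, x2 are example private key
# shares, so the election public key is h = g^(x1 + x2) = g^8 mod p.
p, q, g = 23, 11, 4
x1, x2 = 3, 5

# One mixed ciphertext from the board: an encryption of m = 3 under h.
a, b = 16, 13

# Each authority computes its partial decryption d_i = a^x_i and posts
# it, with a proof, to the bulletin board.
d1, d2 = pow(a, x1, p), pow(a, x2, p)

# The board combines the partial decryptions and recovers the
# plaintext: m = b * (d1 * d2)^(-1) mod p. The inverse is computed
# via Fermat's little theorem (exponent p - 2).
m = (b * pow((d1 * d2) % p, p - 2, p)) % p
```

Note that `d1 * d2 = a^(x1 + x2)`, i.e. exactly the value the full private key would produce, yet neither authority ever learns the other's share; this is the sense in which the combination happens only on partial decryptions.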

#### Tally and verification

The plaintexts are posted on the bulletin board. This completes the public data for the election, which we summarize below:

- Election public key shares and proofs of correctness (for each key authority)
- Election public key
- Cast votes, proofs of knowledge of plaintext, and signatures
- Vote mixes and proofs of shuffle (for each mix authority)
- Partial decryptions of the mixed votes, with proofs (for each key authority)
- Combined partial decryptions and resulting plaintexts

With this information:

- The election result can be obtained by tallying plaintext votes
- Each voter can verify that their vote was recorded correctly
- Anyone can verify that the set of mixed votes corresponds to the recorded (cast) votes
- Anyone can verify that the plaintexts correspond to correct decryption of the mixed votes
- Anyone can verify that the election result corresponds to a correct tally of the plaintexts
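As an illustration of the recorded-as-cast check in the second item, a voter can recompute their receipt hash and look it up among the cast votes on the board (toy values; a real client would hash a canonical encoding of the full ballot):

```python
import hashlib

# Ciphertexts posted on the bulletin board (example values) and the
# ciphertext this voter's client reported sending.
board = [(16, 13), (12, 16)]
my_ciphertext = (16, 13)

# Recompute the receipt hash and check membership on the board.
receipt = hashlib.sha256(
    f"{my_ciphertext[0]}:{my_ciphertext[1]}".encode()
).hexdigest()
posted = {hashlib.sha256(f"{a}:{b}".encode()).hexdigest() for a, b in board}
found = receipt in posted
```

The remaining checks (shuffle, decryption, tally) are public: anyone can re-run the verifier over the proofs listed above without any secret information.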

The above properties, together with the ballot auditing procedure, make the prototype a secure^{[3]} end-to-end verifiable voting system.

References

[1] https://github.com/bfh-evg/univote2/raw/development/doc/report/report.pdf

[2] http://subs.emis.de/LNI/Proceedings/Proceedings232/article100.html

[3] By secure we mean specifically that it employs cryptography to support privacy and verifiability. No voting system is 100% secure in the general sense.

[4] B. Terelius and D. Wikström. Proofs of Restricted Shuffles. In D. J. Bernstein and T. Lange, editors, AFRICACRYPT'10, 3rd International Conference on Cryptology in Africa, LNCS 6055, pages 100–113, Stellenbosch, South Africa, 2010.

[5] D. Wikström. A Commitment-Consistent Proof of a Shuffle. In C. Boyd and J. González Nieto, editors, ACISP'09, 14th Australasian Conference on Information Security and Privacy, LNCS 5594, pages 407–421, Brisbane, Australia, 2009.

[6] http://caislab.kaist.ac.kr/lecture/2010/spring/cs548/basic/B02.pdf