UCLA Olga Radko Endowed Math Circle

Student Research

  • In Fall 2021, ORMC student Natalie Deering and ORMC instructor Nikita Gladkov worked together on hat puzzles and solved a serious research problem. They are writing a paper containing their results, and they have given a guest lecture at the Math Circle explaining them (Link to the video) (Permanent link). The following is the abstract:

Hat puzzles are a gem of recreational mathematics. Problems about prisoners forced to guess the colors of their hats vary significantly in difficulty. After solving some puzzles on the easy side, we show the paradoxes arising from the Axiom of Choice and present our original result showing that the celebrated Continuum Hypothesis is equivalent to a certain hat puzzle.
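For readers who would like to experiment, here is a small illustration, not taken from the paper, of one of the classic puzzles "on the easy side": n prisoners stand in a line, each sees only the hats in front of them, and a single announced parity lets every prisoner but the last deduce their own hat. The Python sketch below, including the name run_line_puzzle and all other details, is our own illustration rather than anything from the authors' work.

```python
import random

def run_line_puzzle(n: int) -> int:
    """Simulate the classic 'prisoners in a line' hat puzzle.

    Hats are 0 or 1. The prisoner at the back sees all hats in front and
    announces the parity of the 1-hats he sees; every prisoner after him
    can then deduce their own hat exactly. Returns the number of correct
    guesses, which is always at least n - 1 with this strategy.
    """
    hats = [random.randint(0, 1) for _ in range(n)]
    guesses = [None] * n

    # The back prisoner sacrifices himself: he announces the parity of
    # the hats he sees in front, which may or may not match his own hat.
    announced = sum(hats[:n - 1]) % 2
    guesses[n - 1] = announced

    # Each remaining prisoner, from back to front, combines the announced
    # parity with the hats he sees in front and the (correct) guesses he
    # has heard from the prisoners behind him, excluding the back one.
    for i in range(n - 2, -1, -1):
        seen_in_front = sum(hats[:i]) % 2
        heard_behind = sum(guesses[i + 1:n - 1]) % 2
        guesses[i] = (announced - seen_in_front - heard_behind) % 2

    return sum(g == h for g, h in zip(guesses, hats))

# With this strategy, at least n - 1 of the n prisoners always guess correctly.
assert all(run_line_puzzle(10) >= 9 for _ in range(1000))
```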


  • In Summer 2021, ORMC students August Deer and Lydia Qin did research at USC under Prof. Avestimehr, supported by AEOP, ARO (award W911NF1810400), and NSF grant CCF-1763673. August went on to write a paper, On Multi-Round Privacy in Federated Learning, published in the proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT 2022). The following is the abstract:

Secure aggregation is essential in federated learning to ensure that the server learns nothing about the local models of the users beyond their aggregate. Secure aggregation alone, however, does not protect the privacy of the users over multiple rounds of training, due to partial user participation at each round. To quantify such multi-round privacy leakage, a new metric has recently been introduced that requires that the server cannot reconstruct any individual model using the aggregate models from any number of training rounds. In addition, a privacy-preserving structured user selection strategy known as Multi-RoundSecAgg has recently been developed to ensure multi-round privacy while taking into account the convergence rate and the fairness of the user selection. Multi-RoundSecAgg, however, trades the multi-round privacy guarantee off against the convergence rate: stronger multi-round privacy requires a larger number of training rounds. In this paper, we consider a weaker notion of multi-round privacy that still requires that the server cannot obtain any individual model. We show that this weaker notion allows for better convergence rates than Multi-RoundSecAgg while still protecting the privacy of the individual users in a weaker sense.
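To give a feel for the setting, the toy Python sketch below is our own illustration, not the paper's Multi-RoundSecAgg construction. It treats each local model as a single number, hides individual uploads behind pairwise additive masks that cancel in the sum, and then shows how partial participation across two rounds can leak one user's model when that model stays fixed between rounds. The helper mask_models, the field size P, and the scalar-model simplification are all assumptions made for the sketch.

```python
import random

P = 2**31 - 1  # arithmetic over a prime field, as in additive secret sharing

def mask_models(models: dict[int, int]) -> dict[int, int]:
    """Toy pairwise-mask secure aggregation over one round.

    Each pair of users (i, j) agrees on a random mask r; user i adds it
    and user j subtracts it. The masks cancel in the sum, so the server
    recovers the exact aggregate while each individual upload looks
    uniformly random on its own.
    """
    users = sorted(models)
    masked = dict(models)
    for a in range(len(users)):
        for b in range(a + 1, len(users)):
            i, j = users[a], users[b]
            r = random.randrange(P)
            masked[i] = (masked[i] + r) % P
            masked[j] = (masked[j] - r) % P
    return masked

# One round: the server sees only masked uploads, yet the sum is exact.
models = {1: 10, 2: 20, 3: 30}
masked = mask_models(models)
assert sum(masked.values()) % P == sum(models.values()) % P

# Multi-round leakage with partial participation: if two rounds differ by
# a single user (and, for illustration, local models do not change between
# rounds), the difference of the two aggregates exposes that user's model.
agg_round1 = sum(mask_models({1: 10, 2: 20, 3: 30}).values()) % P
agg_round2 = sum(mask_models({1: 10, 2: 20}).values()) % P
leaked_model_of_user_3 = (agg_round1 - agg_round2) % P
assert leaked_model_of_user_3 == 30
```

The final assertion is exactly the multi-round leakage the paper's metric is designed to rule out: each round is perfectly private in isolation, yet combining aggregates across rounds reconstructs an individual model.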