Most of us know that the color of our hair, whether we have a chin dimple, and the likelihood of going bald as we get older all have something to do with genetics. But other traits — being a morning person, having an aversion to cilantro or getting irritated at hearing someone chew their food — also have a genetic component.

23andMe recently added four new reports to its growing list of more than two dozen Trait reports. Together, these reports help illustrate how our genetics may influence not just physical traits like the thickness of our hair, but also food preferences and certain behavioral traits.

The reports also help us understand something else that many of us know already but for which we may still need a reminder — our genetics don’t explain everything.

Our traits are influenced by a complex interaction between genetics, environment and lifestyle. And the genetic influences can also be complicated with dozens and even hundreds, or sometimes thousands of genetic variants playing a role.

23andMe’s Trait Reports offer a fun way to explore this complexity by looking at how your DNA may influence physical features or certain personal characteristics. Spend a little time with each report and you quickly get a sense of how your genetics is just part of the bigger picture.

Let’s look at the four new Trait Reports we’ve added:

**Cilantro Taste Aversion** — Many people describe cilantro as tasting like soap, so slipping a bit of the herb into a dish is seen as a culinary crime. 23andMe researchers have identified two genetic variants associated with aversion to cilantro in people of European ancestry. These two variants are near genes that control the olfactory receptors that determine a person’s sense of smell. Some of these receptors detect compounds called aldehydes, which are found in soap and are a major component of the cilantro aroma. While having these variants might make it more likely that you dislike cilantro, it doesn’t guarantee it. The culture in which you grow up makes a difference too. A study from 2012 found distinct differences in aversion to cilantro among different populations. In cultures where use of the herb is more common, among people with South Asian, Hispanic or Middle Eastern ancestry, fewer people dislike cilantro.

**Misophonia** — This report looks at the feelings of rage triggered by certain sounds. Misophonia, from Greek roots meaning “hatred of sound,” is characterized by emotional reactions ranging from rage to panic upon hearing other people chewing, sipping or chomping on their food. Some scientists believe that this could stem from an increased connection between areas of the brain involved with hearing and the “fight or flight” response. 23andMe’s new report looks at a genetic variant near the TENM2 gene — involved in brain development — that is associated with higher odds of misophonia.

**Wake-up Time** — Being a morning person or a night owl is partially associated with your genetics. 23andMe researchers looked at data from more than 70,000 customers who consented to participate in research and found 450 different genetic variants associated with being a morning person or a night owl. This new report uses genetic and non-genetic information in a statistical model to predict what time you’d typically wake up on days off, when you don’t have to get up for work. The report also allows you to adjust your genetics or your age to see how that might impact the prediction. Typically, the older you are, the earlier you wake up.

**Hair Thickness** — Genetics doesn’t just play a role in the color of your locks or whether you might go bald; it also influences the thickness of your hair. The size and shape of your hair follicles determine the thickness of each individual strand. Typically, people with East Asian ancestry have thicker hair than individuals with African or European ancestry. 23andMe’s new report looks at one genetic variant in the EDAR gene, which is important in follicle development and plays a large role in hair thickness. 23andMe also offers six other reports that look at different hair traits, such as early hair loss, the likelihood of developing a bald spot, hair texture, light or dark hair, red hair and having a widow’s peak.

23andMe’s new Trait Reports are just the latest in a series of new offerings for customers interested in exploring their genetics in a fun and inviting way.


*A novel cake-cutting puzzle reveals curiosities about numbers*

Alan Frank introduced the “Muffins Problem” nine years ago. Erich Friedman and Veit Elser found some early general results. Now Bill Gasarch, along with John Dickerson of the University of Maryland, has led a team of undergraduates and high-schoolers (Guangqi Cui, Naveen Durvasula, Erik Metz, Jacob Prinz, Naveen Raman, Daniel Smolyak, and Sung Hyun Yoo) in writing two new papers that develop a full theory.

Today we discuss the problem and how it plays a new game with integers.

The puzzle was popularized five years ago in the New York Times Online. That spoke of *cupcakes* not *muffins*, but Frank’s original term from 2009 has stuck to the pan, since muffins are bigger and firmer and hence easier to cut. The original question was:

How can one divide 3 muffins among 5 students while maximizing the size of the smallest piece?

Everyone will get 3/5 of a muffin. If we cut a 3/5 piece out of the first muffin and give someone else the 2/5 piece left over, then that person will also have a piece of size at most 1/5. That’s no better than the trivial solution of cutting each muffin into fifths. Can we do better?

In fact, we can. Quarter the first muffin—that at least is easy with a knife. With more care we divide the other two muffins into pieces of size 7/20, 7/20, and 6/20. Four people get a quarter and a 7/20 piece, giving 5/20 + 7/20 = 3/5. The fifth student gets the two 6/20 pieces. So f(3,5) ≥ 1/4, writing f(m,s) for the largest possible minimum piece size when dividing m muffins among s students.

Is this optimal? If we divide any muffin into 4 or more pieces, some piece must have size at most 1/4. On the other hand, if we divide each muffin into at most 3 pieces, we have at most 9 pieces total, so some student gets just 1 piece. That must be a 3/5 piece, leaving a 2/5 piece which implies the need for an at-most-1/5 piece, either from halving it or from supplementing it. So we have proved f(3,5) = 1/4.

Other cake-cutting problems involve protocols for “fair division” where one person cuts and another chooses. Here the division is constrained to be fair. The depth comes from the problem’s *minimax*—or *maximin*—nature. It is not a simple linear programming problem. It is not a two-player game but has game-like aspects. It does have an important *duality* property.

The flipped problem is to divide 5 muffins among 3 students. The trivial solution guarantees 1/3. Every muffin must effectively be cut (a muffin handed over whole forces a piece of size at most 4/12 elsewhere), so we have at least 10 pieces, and some student will get at least 4 pieces, at least one of size no more than (5/3)/4 = 5/12. We can achieve that by breaking four muffins into pieces of size 7/12 and 5/12 and the other in half. One student gets the four 5/12 pieces; the other two students each get a half piece and two 7/12 pieces. This is a full proof of f(5,3) = 5/12.
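Both divisions can be checked mechanically with exact rational arithmetic. Here is a small sketch (the helper `check` is ours, not from the papers) that verifies the 3-muffins/5-students division (min piece 1/4) and the 5-muffins/3-students division (min piece 5/12):

```python
from fractions import Fraction as F

def check(muffins, piles, m, s):
    """Verify a division: each muffin's pieces sum to 1, each student pile
    sums to m/s, and the two multisets of pieces coincide.
    Returns the smallest piece."""
    pieces = sorted(p for muffin in muffins for p in muffin)
    assert len(muffins) == m and all(sum(muffin) == 1 for muffin in muffins)
    assert len(piles) == s and all(sum(pile) == F(m, s) for pile in piles)
    assert pieces == sorted(p for pile in piles for p in pile)
    return pieces[0]

# 3 muffins, 5 students: quarter one muffin, split the others 7/20 + 7/20 + 6/20.
m35 = [[F(1, 4)] * 4,
       [F(7, 20), F(7, 20), F(6, 20)],
       [F(7, 20), F(7, 20), F(6, 20)]]
p35 = [[F(1, 4), F(7, 20)]] * 4 + [[F(6, 20), F(6, 20)]]
assert check(m35, p35, 3, 5) == F(1, 4)

# 5 muffins, 3 students: four muffins split 7/12 + 5/12, the fifth halved.
m53 = [[F(7, 12), F(5, 12)]] * 4 + [[F(1, 2), F(1, 2)]]
p53 = [[F(5, 12)] * 4] + [[F(1, 2), F(7, 12), F(7, 12)]] * 2
assert check(m53, p53, 5, 3) == F(5, 12)
```

Running it silently passes both assertions, confirming the stated minimum piece sizes.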

The dual nature of this argument may not be apparent at first but Friedman proved:

Theorem 1 For all m and s, f(m,s) = (m/s)·f(s,m).

*Proof:* Picture Sweeney Todd luring the s students into his barbershop, with the m muffins each proffered by a customer of Mrs. Lovett’s Meat Pies. So we have m hungry muffin providers who will be served pieces of student pie. If a muffin was shared among k students, then its owner will get k pieces of pie in return. The piece-maximization objective is the same as when the students ate the muffins. The only change is that the piece size is reckoned in proportion to the students rather than the muffins, hence the conversion factor m/s.

The paper shows something more: how to convert a proof of optimality of a division in the primal to a proof for the corresponding division in the dual. Above we not only have f(3,5) = (3/5)·f(5,3) but also: the fifth student with the two 6/20 pieces corresponds to the muffin divided into halves, while the four students with a 1/4 piece and a 7/20 piece show the 5/12-and-7/12 division of the other four muffins.

To illustrate another case, f(8,5) strikes me as easier to reason about than f(5,8): Splitting each of the 8 muffins—four into 2/5 plus 3/5 and four into halves—and giving one student the four 2/5 pieces, the others two halves and a 3/5 piece, achieves 2/5. Conversely, each student gets an 8/5 total share, so if someone gets a whole muffin, then the remaining 3/5 share causes a piece of size at most 2/5 somewhere. If not, then there are at least 16 total muffin pieces, so someone gets at least four pieces, and the smallest of those has size at most (8/5)/4 = 2/5. So f(8,5) = 2/5 and hence f(5,8) = (5/8)(2/5) = 1/4.
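As a sanity check (our own sketch, not from the papers), exact arithmetic confirms this division of 8 muffins among 5 students, and the duality factor then gives the flipped value:

```python
from fractions import Fraction as F

share = F(8, 5)  # each of 5 students gets 8/5 muffins
# Four muffins split 2/5 + 3/5, four split into halves.
muffins = [[F(2, 5), F(3, 5)]] * 4 + [[F(1, 2), F(1, 2)]] * 4
# One student takes the four 2/5 pieces; each of the others
# takes a 3/5 piece and two halves.
piles = [[F(2, 5)] * 4] + [[F(3, 5), F(1, 2), F(1, 2)]] * 4

assert all(sum(muffin) == 1 for muffin in muffins)
assert all(sum(pile) == share for pile in piles)
pieces = sorted(p for muffin in muffins for p in muffin)
assert pieces == sorted(p for pile in piles for p in pile)

print(min(pieces))            # smallest piece: 2/5
print(F(5, 8) * min(pieces))  # duality: f(5,8) = (5/8) * f(8,5) = 1/4
```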

One aspect of duality that seems missing, however, is the correspondence between a feasible solution on one side and a *constraint* on the other. For linear programming this placed the problem into NP ∩ co-NP long before Leonid Khachiyan placed it into P. As was shown by Elser, the muffin problem yields a mixed linear and integer program. This is enough to show that f(m,s) is computable and always a rational number, but so far not enough to place problems about f(m,s) into NP ∩ co-NP, let alone P. Trying to do so will show the issues.

The duality allows us to limit attention to m > s. Since cases where s divides m are trivial, we have m/s > 1 and not an integer. Then any solution achieving the optimal minimum piece size f(m,s) must satisfy:

- Every student gets a share of at least two muffins.
- Every muffin is cut into at least two pieces.

The latter implies f(m,s) ≤ 1/2. Note that if s divides 2m then we get f(m,s) = 1/2 by halving each muffin, and vice-versa. So we also consider this a trivial case.

Not so easy to prove, apparently, is f(m,s) ≥ 1/3 (given m > s). It appears as “Appendix E” of the group’s second paper. That and Bill’s talk slides for the 2018 Joint AMS–MAA Meeting have some updates over the ArXiv paper, even though the latter stretches to 199 pages.

Why is the paper so long? There are 103 pages of appendices and tables. These supplement an original effort to build a theory. It starts by defining V = ⌈2m/s⌉, so that nontrivial cases have V ≥ 3, and giving the following basic upper bounds:

Theorem 2 (FC) For nontrivial m > s, f(m,s) ≤ max(1/3, min( m/(sV), 1 − m/(s(V−1)) )).

*Proof:* In an optimal solution, every muffin must be cut into exactly two pieces, else we have f(m,s) ≤ 1/3. It follows that there are 2m pieces, so some students get shares from at least V muffins while others partake in only V − 1 muffins. The former receive some piece of size at most their total m/s divided by V, hence the first inequality. The latter similarly receive some piece of size at least m/(s(V−1)), but then the other piece of the muffin it came from has size at most 1 − m/(s(V−1)).
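The two inequalities combine into an explicitly computable “floor–ceiling” bound, f(m,s) ≤ max(1/3, min(m/(s⌈2m/s⌉), 1 − m/(s⌊2m/s⌋))), which is easy to evaluate with exact rationals. A small sketch (the function name is ours):

```python
from fractions import Fraction

def fc_bound(m, s):
    """Floor-ceiling (FC) upper bound on f(m, s) for m > s:
    max(1/3, min(m/(s*ceil(2m/s)), 1 - m/(s*floor(2m/s))))."""
    V = -((-2 * m) // s)   # ceil(2m/s) using integer arithmetic only
    W = (2 * m) // s       # floor(2m/s)
    return max(Fraction(1, 3),
               min(Fraction(m, s * V), 1 - Fraction(m, s * W)))

print(fc_bound(5, 3))   # 5/12 -- tight: f(5,3) = 5/12
print(fc_bound(8, 5))   # 2/5  -- tight: f(8,5) = 2/5
```

Both printed values match the exact answers derived earlier, illustrating cases where the FC bound is tight.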

The ‘FC’ bounds are tight for f(3,2), f(5,3), f(8,5), and f(13,8), so one might expect it to continue for the whole Fibonacci sequence. But it *fails* for f(21,13): instead of the FC value, this note posted by Bill, using methods found by Metz, gives a strictly smaller upper bound. Efforts to bound other progressions lead to theorems like this one:

Theorem 3 If … and … then putting … gives …

They have been continually charting more individual solutions and also finding more arguments by which to generate cases of upper- and lower-bound theorems. The efforts have been joined by other students. As we go to post, the following bounds—stated with common denominators—have yet to be closed:

The ‘?’ marks a computer run that timed out. The Muffin Team may soon solve some of these, but there are always more to do—unless and until a full characterization is found. This all shows scope for involvement by amateur mathematicians, both for finding more-effective duality arguments and for computational experiments.

The following questions spring to mind—with m > s and the same nontriviality assumptions as above:

- If f(m,s) = a/b in lowest terms, then is b always a multiple of s?
- Is there always an optimal solution in which some student gets all equal-size pieces?
- If d = gcd(m,s) > 1, then does f(m,s) = f(m/d, s/d)—i.e., do things reduce to d identical sub-problems?
- Is f(m,s) given by a simple function of m and s—or a and b by simple integer functions of m and s?

A mark of subtlety is that the first two answers are *no* while the other two remain open problems despite all the work. The first holds whenever either bound in Theorem 2 is tight, or when f(m,s) equals an alternative bound called INT in the long paper. It fails, however, for a known exceptional case. Although it and the other known exceptions share a further common property, even that hasn’t been proved in general.

My thought with question 2 had been to force some relation between b and s. But Metz refuted it by computing f exactly for a particular case and showing that no solution there gives someone shares all of equal size. I have posted his note here with their permission.

If an FC or INT bound is tight for one pair (m, s), then it is tight for a whole integer-parametrized family of related pairs. These bounds are defined in terms of m and s alone and are polynomial-time computable. The team have formalized several other bounds with the same or similar properties. But next we discuss a sense in which the original FC bounds are the ultimate answers.

For any ideal I generated by homogeneous polynomials of some degree r in n variables over some field K, we can set h_I(d) to be the dimension of the quotient space of homogeneous polynomials of degree d modulo the ideal I. David Hilbert proved that there is always a polynomial p such that h_I(d) = p(d) for all but finitely many d. Well, the minimum integer d_0 such that h_I(d) = p(d) holds for all d ≥ d_0 may be huge in terms of r and n, but Hilbert first proved it exists and later gave bounds which have since been refined. It is called the *Hilbert regularity*. The Muffin crew have proved a theorem that strikes me as somehow analogous:

Theorem 4 For all s there exists m_0 such that for all m ≥ m_0, f(m,s) equals one of the bounds in Theorem 2.

They also give a rough general bound on m_0, and for small s they have computed it exactly. One consequence of the regularity is that computing f(m,s), while not known to be in P or even in NP in any evident sense, belongs to the class of *fixed-parameter tractable* problems.

Their last main topic also bridges between Hilbert’s famous “Program” of automating mathematical deduction—the one supposedly destroyed by Kurt Gödel—and PolyMath projects. They have created a “Muffin Theorem Generator” for exceptional cases, and it is the subject of their second paper. They document its use to solve a sizable initial segment of exceptional cases, and they have now resolved all cases for small values of s.

The high-level problem is to find a criterion that expresses the solution as a simple direct function of m and s. Or might there be irreducible complexity “underneath” the regularity bound as s varies?

Short of a full characterization, what divisibility properties of integers are being used, in particular regarding m and s? Their “Muffin Theorem Generator” also gives food for thought on computational experiments—and student research initiatives. Kudos to the students—note the newer bounds in the talk slides in particular.

Early on in the study of quantum computers, computer scientists posed a question whose answer, they knew, would reveal something deep about the power of these futuristic machines. Twenty-five years later, it’s been all but solved. In a paper posted online at the end of May, computer scientists Ran Raz and Avishay Tal provide strong evidence that quantum computers possess a computing capacity beyond anything classical computers could ever achieve.

Raz, a professor at Princeton University and the Weizmann Institute of Science, and Tal, a postdoctoral fellow at Stanford University, define a specific kind of computational problem. They prove, with a certain caveat, that quantum computers could handle the problem efficiently while traditional computers would bog down forever trying to solve it. Computer scientists have been looking for such a problem since 1993, when they first defined a class of problems known as “BQP,” which encompasses all problems that quantum computers can solve.

Since then, computer scientists have hoped to contrast BQP with a class of problems known as “PH,” which encompasses all the problems workable by any possible classical computer — even unfathomably advanced ones engineered by some future civilization. Making that contrast depended on finding a problem that could be proven to be in BQP but not in PH. And now, Raz and Tal have done it.

The result does not elevate quantum computers over classical computers in any practical sense. For one, theoretical computer scientists already knew that quantum computers can solve any problems that classical computers can. And engineers are still struggling to build a useful quantum machine. But Raz and Tal’s paper demonstrates that quantum and classical computers really are a category apart — that even in a world where classical computers succeed beyond all realistic dreams, quantum computers would still stand beyond them.

A basic task of theoretical computer science is to sort problems into complexity classes. A complexity class contains all problems that can be solved within a given resource budget, where the resource is something like time or memory.

Computer scientists have found an efficient algorithm, for example, for testing whether a number is prime. They have not, however, been able to find an efficient algorithm for identifying the prime factors of large numbers. Therefore, computer scientists believe (but have not been able to prove) that those two problems belong to different complexity classes.

The two most famous complexity classes are “P” and “NP.” P is all the problems that a classical computer can solve quickly. (“Is this number prime?” belongs to P.) NP is all the problems that classical computers can’t necessarily solve quickly, but for which they can quickly verify an answer if presented with one. (“What are its prime factors?” belongs to NP.) Computer scientists believe that P and NP are distinct classes, but actually proving that distinctness is the hardest and most important open problem in the field.

In 1993 computer scientists Ethan Bernstein and Umesh Vazirani defined a new complexity class that they called BQP, for “bounded-error quantum polynomial time.” They defined this class to contain all the decision problems — problems with a yes or no answer — that quantum computers can solve efficiently. Around the same time they also proved that quantum computers can solve all the problems that classical computers can solve. That is, BQP contains all the problems that are in P.

But they could not determine whether BQP contains problems not found in another important class of problems known as “PH,” which stands for “polynomial hierarchy.” PH is a generalization of NP. This means it contains all problems you get if you start with a problem in NP and make it more complex by layering qualifying statements like “there exists” and “for all.”^{1} Classical computers today can’t solve most of the problems in PH, but you can think of PH as the class of all problems classical computers could solve if P turned out to equal NP. In other words, to compare BQP and PH is to determine whether quantum computers have an advantage over classical computers that would survive even if classical computers could (unexpectedly) solve many more problems than they can today.

“PH is one of the most basic classical complexity classes there is,” said Scott Aaronson, a computer scientist at the University of Texas at Austin. “So we sort of want to know, where does quantum computing fit into the world of classical complexity theory?”

The best way to distinguish between two complexity classes is to find a problem that is provably in one and not the other. Yet due to a combination of fundamental and technical obstacles, finding such a problem has been a challenge.

If you want a problem that is in BQP but not in PH, you have to identify something that “by definition a classical computer could not even efficiently verify the answer, let alone find it,” said Aaronson. “That rules out a lot of the problems we think about in computer science.”

Here’s the problem. Imagine you have two random number generators, each producing a sequence of digits. The question for your computer is this: Are the two sequences completely independent from each other, or are they related in a hidden way (where one sequence is the “Fourier transform” of the other)? Aaronson introduced this “forrelation” problem in 2009 and proved that it belongs to BQP. That left the harder, second step — to prove that forrelation is not in PH.
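To make the “hidden Fourier relation” concrete: in Aaronson’s formal version (an assumption here—the article describes it loosely in terms of random sequences), the inputs are Boolean functions f, g: {0,1}^n → {±1}, and the quantity of interest is Φ = 2^(−3n/2) · Σ_{x,y} f(x)·(−1)^{x·y}·g(y); the decision problem is whether |Φ| is large or tiny. A brute-force sketch:

```python
from itertools import product

def forrelation(f, g, n):
    """Phi = 2^(-3n/2) * sum over x, y of f(x) * (-1)^(x.y) * g(y)."""
    total = 0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / 2 ** (1.5 * n)

# For f = g = all-ones, the inner sum over y vanishes except at x = 0,
# giving Phi = 2^(-n/2); for n = 2 that is 0.5.
print(forrelation(lambda x: 1, lambda y: 1, 2))  # 0.5
```

A quantum computer can estimate Φ with a single query to f and g, which is the sense in which it needs just one “hint” below; classically, Φ is a global property of exponentially many values.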

Which is what Raz and Tal have done, in a particular sense. Their paper achieves what is called “oracle” (or “black box”) separation between BQP and PH. This is a common kind of result in computer science and one that researchers resort to when the thing they’d really like to prove is beyond their reach.

The actual best way to distinguish between complexity classes like BQP and PH is to measure the computational time required to solve a problem in each. But computer scientists “don’t have a very sophisticated understanding of, or ability to measure, actual computation time,” said Henry Yuen, a computer scientist at the University of Toronto.

So instead, computer scientists measure something else that they hope will provide insight into the computation times they can’t measure: They work out the number of times a computer needs to consult an “oracle” in order to come back with an answer. An oracle is like a hint-giver. You don’t know how it comes up with its hints, but you do know they’re reliable.

If your problem is to figure out whether two random number generators are secretly related, you can ask the oracle questions such as “What’s the sixth number from each generator?” Then you compare computational power based on the number of hints each type of computer needs to solve the problem. The computer that needs more hints is slower.

“In some sense we understand this model much better. It talks more about information than computation,” said Tal.

The new paper by Raz and Tal proves that a quantum computer needs far fewer hints than a classical computer to solve the forrelation problem. In fact, a quantum computer needs just one hint, while even with unlimited hints, there’s no algorithm in PH that can solve the problem. “This means there is a very efficient quantum algorithm that solves that problem,” said Raz. “But if you only consider classical algorithms, even if you go to very high classes of classical algorithms, they cannot.” This establishes that with an oracle, forrelation is a problem that is in BQP but not in PH.

Raz and Tal came close to this result almost four years ago, but they couldn’t complete one step in their would-be proof. Then just a month ago, Tal heard a talk on a new paper on pseudorandom number generators and realized the techniques in that paper were just what he and Raz needed to finish their own. “This was the missing piece,” said Tal.

News of the separation between BQP and PH circulated quickly. “The quantum complexity world is a-rocking,” wrote Lance Fortnow, a computer scientist at Georgia Tech, the day after Raz and Tal posted their proof.

The work provides an ironclad assurance that quantum computers exist in a different computational realm than classical computers (at least relative to an oracle). Even in a world where P equals NP — one where the traveling salesman problem is as simple as finding a best-fit line on a spreadsheet — Raz and Tal’s proof demonstrates that there would still be problems only quantum computers could solve.

“Even if P were equal to NP, even making that strong assumption,” said Fortnow, “that’s not going to be enough to capture quantum computing.”

*Correction June 21, 2018: An earlier version of this article stated that the version of the traveling salesman problem that asks if a certain path is exactly the shortest distance is “likely” to be in PH. In fact, it has been proved to be in PH.*
