
Raj, the Elm Architecture for JavaScript, releases v1.0!

submitted by /u/chrisishereladies to r/javascript
Read the whole story
clumma
7 hours ago
reply
Berkeley, CA
Share this story
Delete

23andMe Adds Four New Trait Reports


Most of us know that the color of our hair, or whether we have a chin dimple, or the likelihood of going bald when we get older has something to do with genetics, but other traits — being a morning person, having an aversion to cilantro or getting irritated at hearing someone chew their food — also have a genetic component.

23andMe recently added four new reports to its growing list of more than two dozen Trait reports. Together, these reports help illustrate how our genetics may influence not just physical traits like the thickness of our hair, but also food preferences and certain behavioral traits.

The reports also help us understand something else that many of us know already but for which we may still need a reminder — our genetics don’t explain everything.

Our traits are influenced by a complex interaction between genetics, environment and lifestyle. And the genetic influences themselves can be complicated, with dozens, hundreds, or sometimes thousands of genetic variants playing a role.

23andMe’s Trait Reports offer a fun way to explore this complexity by looking at how your DNA may influence physical features or certain personal characteristics. Spend a little time with each report and you quickly get a sense of how your genetics is just part of the bigger picture.

Let’s look at the four new Trait Reports we’ve added:

  • Cilantro Taste Aversion — Many people describe cilantro as tasting like soap, so slipping a bit of the herb into a dish is seen as a culinary crime. 23andMe researchers have identified two genetic variants associated with aversion to cilantro in people of European ancestry. These two variants are near genes that control the olfactory receptors that determine a person’s sense of smell. Some of these receptors detect compounds called aldehydes, which are found in soap and are a major component of the cilantro aroma. While having these variants might make it more likely that you dislike cilantro, it doesn’t guarantee it. The culture in which you grow up makes a difference too. A study from 2012 found distinct differences in aversion to cilantro among different populations. In cultures where the use of the herb is more common, such as among people with South Asian, Hispanic or Middle Eastern ancestry, fewer people dislike cilantro.

  • Misophonia — The report looks at the feelings of rage triggered by certain sounds. Misophonia, from Greek roots meaning “hatred of sound,” is characterized by an emotional reaction, ranging from rage to panic, upon hearing other people chewing, sipping or chomping on their food. Some scientists believe this could stem from an increased connection within brain circuits involved in hearing and the “fight or flight” response. 23andMe’s new report looks at a genetic variant near the TENM2 gene — involved in brain development — that is associated with higher odds of misophonia.

 

  • Wake-up Time — Being a morning person or a night owl is partially associated with your genetics. 23andMe researchers looked at data from more than 70,000 customers who consented to participate in research and found 450 different genetic variants associated with being a morning person or a night owl. This new report uses genetic and non-genetic information in a statistical model to predict what time you’d typically wake up on days off, when you don’t have to wake up for work. The report also allows you to adjust your genetics or your age to see how that might change the prediction. Typically, the older you are, the earlier you wake up.

  • Hair Thickness — Genetics doesn’t just play a role in the color of your locks or whether you might go bald; it also influences the thickness of your hair. The size and shape of your hair follicles influence the thickness of each individual strand. Typically, people with East Asian ancestry have thicker hair than individuals with African or European ancestry. 23andMe’s new report looks at one genetic variant in the gene EDAR that is important in follicle development and plays a large role in hair thickness. 23andMe also offers six other reports that look at different hair traits, such as early hair loss, the likelihood of developing a bald spot, hair texture, light or dark hair, red hair and having a widow’s peak.

23andMe’s new Trait Reports are just the latest in a series of new offerings for customers interested in exploring their genetics in a fun and inviting way.

 

The post 23andMe Adds Four New Trait Reports appeared first on 23andMe Blog.


Muffins and Integers



A novel cake-cutting puzzle reveals curiosities about numbers

Alan Frank introduced the “Muffins Problem” nine years ago. Erich Friedman and Veit Elser found some early general results. Now Bill Gasarch, along with John Dickerson of the University of Maryland, has led a team of undergraduates and high-schoolers (Guangqi Cui, Naveen Durvasula, Erik Metz, Jacob Prinz, Naveen Raman, Daniel Smolyak, and Sung Hyun Yoo) in writing two new papers that develop a full theory.

Today we discuss the problem and how it plays a new game with integers.

The puzzle was popularized five years ago in the New York Times Online. That article spoke of cupcakes, not muffins, but Frank’s original term from 2009 has stuck to the pan, since muffins are bigger and firmer and hence easier to cut. The original question was:

How can one divide {m = 3} muffins among {s = 5} students while maximizing the size {f(m,s)} of the smallest piece?

Everyone will get {3/5} of a muffin. If we cut a {3/5} piece out of the first muffin and give someone else the {2/5} piece left over, then that person still needs {1/5} more and so will also have a piece of size at most {1/5}. That’s no better than the trivial solution of cutting each muffin into fifths. Can we do better?

A Kinder Cut

In fact, we can. Quarter the first muffin—that at least is easy with a knife. With more care we divide the other two muffins into pieces of size {7/20}, {7/20}, and {6/20 = 3/10}. Four people get a quarter and a {7/20} piece giving {(5+7)/20 = 3/5}. The fifth student gets the two {3/10} pieces. So {f(3,5) \geq 1/4}.

Is this optimal? If we divide any muffin into {4} or more pieces, some piece must have size at most {1/4}. On the other hand, if we divide each muffin into at most {3} pieces, we have at most {9} pieces total, so some student gets just {1} piece. That must be a {3/5} piece, leaving a {2/5} piece, which implies the need for an at-most-{1/5} piece, either from halving it or from supplementing it. So we have proved {f(3,5) = 1/4}.
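The division above is easy to check mechanically. Here is a small sketch in Python using exact rational arithmetic; the piece sizes are exactly those from the text:

```python
from fractions import Fraction as F

# The f(3,5) = 1/4 division described above:
# muffin 1 is quartered; muffins 2 and 3 are each cut into 7/20, 7/20, 6/20.
muffins = [
    [F(1, 4)] * 4,
    [F(7, 20), F(7, 20), F(6, 20)],
    [F(7, 20), F(7, 20), F(6, 20)],
]
# Four students each get a 1/4 piece plus a 7/20 piece;
# the fifth student gets the two 6/20 = 3/10 pieces.
students = [[F(1, 4), F(7, 20)] for _ in range(4)] + [[F(6, 20), F(6, 20)]]

assert all(sum(m) == 1 for m in muffins)          # every muffin fully used
assert all(sum(s) == F(3, 5) for s in students)   # every share is fair
muffin_pieces = sorted(p for m in muffins for p in m)
student_pieces = sorted(p for s in students for p in s)
assert muffin_pieces == student_pieces            # same pieces, reassigned
print(min(muffin_pieces))                         # smallest piece: 1/4
```

The `fractions` module keeps everything exact, so the check that the smallest piece is {1/4} involves no floating-point slop.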

Other cake-cutting problems involve protocols for “fair division” where one person cuts and another chooses. Here the division is constrained to be fair. The depth comes from the problem’s minimax—or maximin—nature. It is not a simple linear programming problem. It is not a two-player game but has game-like aspects. It does have an important duality property.

Duality

The flipped problem is to divide {5} muffins among {3} students. The trivial solution guarantees {f(5,3) \geq 1/3}. We must have at least {10} pieces, so some student will get {4} pieces, at least one of size no more than {5/(3\cdot 4) = 5/12}. We can achieve that by breaking four muffins into pieces of size {5/12} and {7/12} and cutting the fifth in half: one student gets the four {5/12} pieces, and the other two students each get a {6/12} piece and two {7/12} pieces. This is a full proof of {f(5,3) = 5/12}.
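The same mechanical check works for this construction (a sketch using Python's exact rationals, with the layout as just described):

```python
from fractions import Fraction as F

# The f(5,3) = 5/12 division: four muffins cut 5/12 : 7/12, the fifth in half.
muffins = [[F(5, 12), F(7, 12)]] * 4 + [[F(6, 12), F(6, 12)]]
# One student takes the four 5/12 pieces; the other two each take
# a half-muffin piece plus two 7/12 pieces.
students = [[F(5, 12)] * 4] + [[F(6, 12), F(7, 12), F(7, 12)]] * 2

assert all(sum(m) == 1 for m in muffins)          # muffins fully used
assert all(sum(s) == F(5, 3) for s in students)   # each student gets 5/3
assert sorted(p for m in muffins for p in m) == \
       sorted(p for s in students for p in s)     # same multiset of pieces
print(min(p for m in muffins for p in m))         # smallest piece: 5/12
```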

The dual nature of this argument may not be apparent at first but Friedman proved:

Theorem 1 For all {m,s \in \mathbb{N^+}}, {f(s,m) = \frac{s}{m}f(m,s)}.

Proof: Picture Sweeney Todd luring the {s} students into his barbershop with {m} muffins each proffered by a customer of Mrs. Lovett’s Meat Pies. So we have {m} hungry muffin providers who will be served pieces of student pie. If a muffin was shared among {k} students then its owner will get {k} pieces of pie in return. The piece-maximization objective is the same as when the students ate the muffins. The only change is that the piece size is reckoned in proportion to the students rather than the muffins, hence the conversion factor {\frac{s}{m}}. \Box

The paper shows something more: how to convert a proof of optimality of a division in the primal to a proof for the corresponding division in the dual. Above we not only have {f(5,3) = \frac{5}{3}\cdot \frac{1}{4} = \frac{5}{12}} but also the fifth student with the two {3/10} pieces corresponds to the muffin divided into halves, the others with a {1/4 = 5/20} and {7/20} piece showing the {5:7} division out of {\frac{3}{5}\cdot 20 = 12}.

To illustrate another case, {f(8,5)} strikes me as easier to reason about than {f(5,8)}: Splitting each of {8} muffins {\frac{2}{5}:\frac{3}{5}} and giving one student four {\frac{2}{5}} pieces, the others two {\frac{3}{5}} and a {\frac{2}{5}}, achieves {2/5}. Conversely, each student gets an {8/5} total share, so if someone gets a whole muffin then the remaining {3/5} share causes a {\frac{2}{5}} piece somewhere. If not, then there are {16} total muffin pieces, so someone gets four pieces, and the smallest of those has size at most {\frac{2}{5}}. So {f(8,5) = 2/5} and hence {f(5,8) = 1/4}.
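The {f(8,5)} construction can be verified the same way, and the known values let us spot-check Theorem 1 numerically:

```python
from fractions import Fraction as F

# f(8,5) = 2/5: split each of the 8 muffins 2/5 : 3/5.
muffins = [[F(2, 5), F(3, 5)]] * 8
# One student takes four 2/5 pieces; each of the other four students
# takes two 3/5 pieces and one 2/5 piece.
students = [[F(2, 5)] * 4] + [[F(3, 5), F(3, 5), F(2, 5)]] * 4

assert all(sum(m) == 1 for m in muffins)
assert all(sum(s) == F(8, 5) for s in students)
assert sorted(p for m in muffins for p in m) == \
       sorted(p for s in students for p in s)

# Theorem 1: f(s,m) = (s/m) * f(m,s), checked on the values derived so far.
f = {(3, 5): F(1, 4), (5, 3): F(5, 12), (8, 5): F(2, 5), (5, 8): F(1, 4)}
assert f[(5, 3)] == F(5, 3) * f[(3, 5)]
assert f[(5, 8)] == F(5, 8) * f[(8, 5)]
```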

One aspect of duality that seems missing, however, is the correspondence between a feasible solution on one side and a constraint on the other. For linear programming this placed it into {\mathsf{NP \cap co}\text{-}\mathsf{NP}} long before Leonid Khachiyan placed it into {\mathsf{P}}. As was shown by Elser, the muffin problem yields a mixed linear and integer program. This is enough to show that {f(m,s)} is computable and always a rational number {q/r} but so far not to place problems about {q} and {r} into {\mathsf{NP}} let alone {\mathsf{P}}. Trying {f(11,5)} instead will show the issues.

Discovering, Charting, and Theory-Building

The duality allows us to limit attention to {m \geq s}. Since cases where {s} divides {m} are trivial, we have {m \geq s+1} and {m/s} not an integer. Then any solution achieving optimal minimum piece size {k = f(m,s)} must satisfy:

  • Every student gets a share of at least two muffins.

  • Every muffin is cut into at least two pieces.

The latter implies {f(m,s) \leq 1/2}. Note that if {s} divides {2m} then we get {f(m,s) = 1/2} by halving each muffin, and vice-versa. So we also consider this a trivial case.

Not so easy to prove, apparently, is {f(m,s) \geq 1/3} (given {m \geq s}). It appears as “Appendix E” of the group’s second paper. That and Bill’s talk slides for the 2018 Joint AMS-MAA Meeting have some updates over the arXiv paper, even though the latter stretches to 199 pages.

Why is the paper so long? There are 103 pages of appendices and tables. These supplement an original effort to build a theory. It starts by defining {\ell = \lfloor 2m/s \rfloor}, so that nontrivial cases have {\ell < \frac{2m}{s} < \ell+1}, and giving the following basic upper bounds:

Theorem 2 For {m \geq s}, {f(m,s)} is at most the minimum of {\frac{m}{s(\ell+1)}} and {1 - \frac{m}{s\ell}}.

Proof: In an optimal solution, every muffin must be cut into exactly two pieces, else we have {f(m,s) \leq 1/3}. It follows that some students get shares from {(\ell+1)} muffins and others partake in only {\ell} muffins. The former receive some piece of size at most their {\frac{m}{s}} total divided by {(\ell+1)}, hence the first inequality. The latter similarly receive some piece of size at least {\frac{m}{s\ell}}, but then the other piece of the muffin it came from has size at most {1 - \frac{m}{s\ell}}. \Box

The ‘FC’ bounds are tight for {f(8,5) = 8/20 = 2/5}, {f(13,8) = 13/32}, {f(21,13) = 21/52}, and {f(34,21) = 34/84 = 17/42}, so one might expect the pattern to continue for the whole Fibonacci sequence. But it fails for {f(55,34)}: instead of {55/136 = 0.4044...}, this note posted by Bill using methods found by Metz gives an upper bound of {151/374 = 0.4037...}. Efforts to bound other progressions lead to theorems like this one:
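The bounds of Theorem 2 are simple enough to compute exactly. A short sketch (the function name `fc_bound` is mine) reproduces the Fibonacci values above and shows where the bound stops being tight:

```python
from fractions import Fraction as F

def fc_bound(m, s):
    """Upper bound on f(m,s) from Theorem 2, for the nontrivial cases m >= s."""
    l = (2 * m) // s  # l = floor(2m/s)
    return min(F(m, s * (l + 1)), 1 - F(m, s * l))

# The Fibonacci cases where the FC bound is known to be tight:
assert fc_bound(8, 5) == F(2, 5)
assert fc_bound(13, 8) == F(13, 32)
assert fc_bound(21, 13) == F(21, 52)
assert fc_bound(34, 21) == F(17, 42)

# For the next Fibonacci pair the FC bound is 55/136 = 0.4044...,
# but the true value is at most 151/374 = 0.4037..., strictly smaller.
assert fc_bound(55, 34) == F(55, 136)
assert F(151, 374) < fc_bound(55, 34)
```

Since the bound depends only on {\frac{m}{s}} and a floor, it is computable in time polynomial in the bit-lengths of {m} and {s}, which is part of what makes the FC bounds attractive as a candidate characterization.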

Theorem 3 If {\frac{5d}{13} \leq a \leq \frac{13d}{29}} and {a \neq \frac{2}{5}d} then putting {X = \max\{\frac{5a-d}{6}, \frac{a+d}{8}, \frac{3a}{7}\}} gives

\displaystyle  f(3dk + a + d, 3dk + a) \leq \frac{dk+X}{3dk+a}.

They have been continually charting more individual solutions and also finding more arguments by which to generate upper and lower bound theorem cases. The efforts have been joined by other students. As we go to post, the following bounds—ordered by {s} and stated with common denominators—have yet to be closed:

\displaystyle  \begin{array}{ccccc}
2009/4410 &\leq &f(67,21) &\leq &2010/4410\\
669/1500 &\leq &f(67,25) &\leq &670/1500\\
? &\leq &f(61,27) &\leq &281/648\\
620/1440 &\leq &f(69,32) &\leq &621/1440\\
273/666 &\leq &f(62,37) &\leq &274/666\\
591/1440 &\leq &f(67,40) &\leq &592/1440\\
325/820 &\leq &f(70,41) &\leq &328/820\\
139/344 &\leq &f(70,43) &\leq &140/344\\
912/2632 &\leq &f(61,47) &\leq &917/2632\\
152/441 &\leq &f(64,49) &\leq &153/441\\
355/1000 &\leq &f(63,50) &\leq &356/1000\\
209/612 &\leq &f(67,51) &\leq &210/612\\
110/318 &\leq &f(69,53) &\leq &111/318\\
552/1540 &\leq &f(67,55) &\leq &553/1540
\end{array}

The `?’ marks a computer run that timed out. The Muffin Team may soon solve some of these, but there are always more to do—unless and until a full characterization is found. This all shows scope for involvement by amateur mathematicians both for finding more-effective duality arguments and for computational experiments.

Higher-Level Questions and Subtleties

The following questions spring to mind—with {m \geq s} and the same nontriviality assumptions as above:

  1. If {f(m,s) = q/r} in lowest terms, then is {r} always a multiple of {s}?

  2. Is there always an optimal solution in which some student gets all equal-size pieces?

  3. If {\gcd(m,s) = a} then does {f(m,s) = f(\frac{m}{a},\frac{s}{a})}—i.e., do things reduce to {a} identical sub-problems?

  4. Is {q/r} given by a simple function of {m/s}—or {q} and {r} by simple integer functions of {m} and {s}?

A mark of subtlety is that the first two answers are no, while the other two remain open problems despite all the work. The property in question 1 holds whenever either bound in Theorem 2 is tight, or when {f(m,s)} equals an alternative bound called {\mathrm{INT}(m,s)} in the long paper. It fails, however, for {f(31,14) = 3/7}. Although it and other known exceptions have {\gcd(r,s) > 1}, even that weaker statement hasn’t been proved.

My thought with question 2 had been to force some relation between {r} and {s\ell}. But Metz refuted it by showing that {f(35,13) = 64/143} and that no solution gives someone shares of equal size. Here {\ell = \lfloor 70/13\rfloor = 5} but {143 = 11\cdot 13} is not a multiple of {65}. I have posted his note here with their permission.

If an FC or INT bound is tight for {f(m,s)} then it is tight for {f(am,as)} for all integers {a \geq 1}. These bounds are defined in terms of {\frac{m}{s}} alone and are polynomial-time computable. The team have formalized several other bounds with the same or similar properties. But next we discuss a sense in which the original FC bounds are the ultimate answers.

Touches of Hilbert?

For any ideal {I} generated by homogeneous polynomials of some degree {d_0} in variables {x_1,\dots,x_n} over some field {K}, we can set {h_M(d)} to be the dimension of the quotient space {M_d} of homogeneous polynomials of degree {d \geq d_0} modulo the ideal {I}. David Hilbert proved that there is always a polynomial {p_M} such that {h_M(d) = p_M(d)} for all but finitely many {d}. Well, the minimum integer {D_0} such that {h_M(d) = p_M(d)} holds for all {d \geq D_0} may be huge in terms of {d_0} and {n}, but Hilbert first proved it exists and later gave bounds which have since been refined. It is called the Hilbert regularity. The Muffin crew have proved a theorem that strikes me as somehow analogous:

Theorem 4 For all {s > 0} there exists {M_s > 0} such that for all {m \geq M_s}, {f(m,s)} equals one of the bounds in Theorem 2.

They also give a bound of roughly {s^3} on {M_s}. For {s \leq 7} they have computed {M_s} exactly. One consequence of the regularity is that computing {f(m,s)}, while not known to be in {\mathsf{P}} or even in {\mathsf{NP}} in any sense, belongs to the class {\mathsf{FPT}} of fixed-parameter tractable problems.

Their last main topic also bridges between Hilbert’s famous “Program” of automating mathematical deduction—the one supposedly destroyed by Kurt Gödel—and PolyMath projects. They have created a “Muffin Theorem Generator” for exceptional cases, and it is the subject of their second paper. They document its use to solve a sizable initial segment of exceptional cases having {s > 7}, and they have now resolved all cases for {s} up through {9}.

Open Problems

The high-level problem is to find a criterion that expresses the solution {q/r} as a simple direct function of {m} and {s}. Or might there be irreducible complexity “underneath” the regularity bound {M_s} as {s} varies?

Short of a full characterization, what divisibility properties of integers are being used, in particular regarding {r} and {s}? Their “Muffin Theorem Generator” also gives food for thought on computational experiments—and student research initiatives. Kudos to the students—note the newer bounds in the talk slides in particular.






Finally, a Problem That Only Quantum Computers Will Ever Be Able to Solve


Early on in the study of quantum computers, computer scientists posed a question whose answer, they knew, would reveal something deep about the power of these futuristic machines. Twenty-five years later, it’s been all but solved. In a paper posted online at the end of May, computer scientists Ran Raz and Avishay Tal provide strong evidence that quantum computers possess a computing capacity beyond anything classical computers could ever achieve.

Raz, a professor at Princeton University and the Weizmann Institute of Science, and Tal, a postdoctoral fellow at Stanford University, define a specific kind of computational problem. They prove, with a certain caveat, that quantum computers could handle the problem efficiently while traditional computers would bog down forever trying to solve it. Computer scientists have been looking for such a problem since 1993, when they first defined a class of problems known as “BQP,” which encompasses all problems that quantum computers can solve.

Since then, computer scientists have hoped to contrast BQP with a class of problems known as “PH,” which encompasses all the problems workable by any possible classical computer — even unfathomably advanced ones engineered by some future civilization. Making that contrast depended on finding a problem that could be proven to be in BQP but not in PH. And now, Raz and Tal have done it.

The result does not elevate quantum computers over classical computers in any practical sense. For one, theoretical computer scientists already knew that quantum computers can solve any problems that classical computers can. And engineers are still struggling to build a useful quantum machine. But Raz and Tal’s paper demonstrates that quantum and classical computers really are a category apart — that even in a world where classical computers succeed beyond all realistic dreams, quantum computers would still stand beyond them.

Quantum Classes

A basic task of theoretical computer science is to sort problems into complexity classes. A complexity class contains all problems that can be solved within a given resource budget, where the resource is something like time or memory.

Computer scientists have found an efficient algorithm, for example, for testing whether a number is prime. They have not, however, been able to find an efficient algorithm for identifying the prime factors of large numbers. Therefore, computer scientists believe (but have not been able to prove) that those two problems belong to different complexity classes.

The two most famous complexity classes are “P” and “NP.” P is all the problems that a classical computer can solve quickly. (“Is this number prime?” belongs to P.) NP is all the problems that classical computers can’t necessarily solve quickly, but for which they can quickly verify an answer if presented with one. (“What are its prime factors?” belongs to NP.) Computer scientists believe that P and NP are distinct classes, but actually proving that distinctness is the hardest and most important open problem in the field.

In 1993 computer scientists Ethan Bernstein and Umesh Vazirani defined a new complexity class that they called BQP, for “bounded-error quantum polynomial time.” They defined this class to contain all the decision problems — problems with a yes or no answer — that quantum computers can solve efficiently. Around the same time they also proved that quantum computers can solve all the problems that classical computers can solve. That is, BQP contains all the problems that are in P.

But they could not determine whether BQP contains problems not found in another important class of problems known as “PH,” which stands for “polynomial hierarchy.” PH is a generalization of NP. This means it contains all problems you get if you start with a problem in NP and make it more complex by layering qualifying statements like “there exists” and “for all.” Classical computers today can’t solve most of the problems in PH, but you can think of PH as the class of all problems classical computers could solve if P turned out to equal NP. In other words, to compare BQP and PH is to determine whether quantum computers have an advantage over classical computers that would survive even if classical computers could (unexpectedly) solve many more problems than they can today.

“PH is one of the most basic classical complexity classes there is,” said Scott Aaronson, a computer scientist at the University of Texas at Austin. “So we sort of want to know, where does quantum computing fit into the world of classical complexity theory?”

The best way to distinguish between two complexity classes is to find a problem that is provably in one and not the other. Yet due to a combination of fundamental and technical obstacles, finding such a problem has been a challenge.

If you want a problem that is in BQP but not in PH, you have to identify something that “by definition a classical computer could not even efficiently verify the answer, let alone find it,” said Aaronson. “That rules out a lot of the problems we think about in computer science.”

Ask the Oracle

Here’s the problem. Imagine you have two random number generators, each producing a sequence of digits. The question for your computer is this: Are the two sequences completely independent from each other, or are they related in a hidden way (where one sequence is the “Fourier transform” of the other)? Aaronson introduced this “forrelation” problem in 2009 and proved that it belongs to BQP. That left the harder, second step — to prove that forrelation is not in PH.
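For concreteness, here is a toy numerical sketch of the forrelation quantity (this illustrates the quantity from Aaronson’s formulation, not the Raz–Tal construction): with {f} and {g} as ±1 sequences of length {2^n}, it measures the correlation between {g} and the Fourier (Walsh–Hadamard) transform of {f}. A secretly related pair scores near 0.8, while an independent pair scores near 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
N = 2 ** n  # sequence length

# Sylvester construction of the Walsh-Hadamard sign matrix:
# H[x, y] = (-1)^(x . y) over {0,1}^n.
H = np.ones((1, 1))
for _ in range(n):
    H = np.block([[H, H], [H, -H]])

f = rng.choice([-1.0, 1.0], size=N)   # first +/-1 sequence
fhat = H @ f / N                      # Fourier coefficients of f

def forrelation(f, g):
    # Phi = 2^(-3n/2) * sum_{x,y} f(x) (-1)^(x . y) g(y)
    return float(f @ H @ g) / N ** 1.5

g_corr = np.where(fhat >= 0, 1.0, -1.0)   # g tracks the sign of f's transform
g_ind = rng.choice([-1.0, 1.0], size=N)   # g drawn independently of f

phi_corr = forrelation(f, g_corr)
phi_ind = forrelation(f, g_ind)
print(round(phi_corr, 2), round(phi_ind, 2))
```

With these parameters `phi_corr` lands near sqrt(2/π) ≈ 0.8 while `phi_ind` stays within a few hundredths of 0; deciding which regime a given pair of sequences is in is the forrelation decision problem.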

Which is what Raz and Tal have done, in a particular sense. Their paper achieves what is called “oracle” (or “black box”) separation between BQP and PH. This is a common kind of result in computer science and one that researchers resort to when the thing they’d really like to prove is beyond their reach.

The actual best way to distinguish between complexity classes like BQP and PH is to measure the computational time required to solve a problem in each. But computer scientists “don’t have a very sophisticated understanding of, or ability to measure, actual computation time,” said Henry Yuen, a computer scientist at the University of Toronto.

So instead, computer scientists measure something else that they hope will provide insight into the computation times they can’t measure: They work out the number of times a computer needs to consult an “oracle” in order to come back with an answer. An oracle is like a hint-giver. You don’t know how it comes up with its hints, but you do know they’re reliable.

If your problem is to figure out whether two random number generators are secretly related, you can ask the oracle questions such as “What’s the sixth number from each generator?” Then you compare computational power based on the number of hints each type of computer needs to solve the problem. The computer that needs more hints is slower.

“In some sense we understand this model much better. It talks more about information than computation,” said Tal.

The new paper by Raz and Tal proves that a quantum computer needs far fewer hints than a classical computer to solve the forrelation problem. In fact, a quantum computer needs just one hint, while even with unlimited hints, there’s no algorithm in PH that can solve the problem. “This means there is a very efficient quantum algorithm that solves that problem,” said Raz. “But if you only consider classical algorithms, even if you go to very high classes of classical algorithms, they cannot.” This establishes that with an oracle, forrelation is a problem that is in BQP but not in PH.

Raz and Tal nearly achieved this result four years ago, but they couldn’t complete one step in their would-be proof. Then, just a month ago, Tal heard a talk on a new paper on pseudorandom number generators and realized the techniques in that paper were just what he and Raz needed to finish their own. “This was the missing piece,” said Tal.

News of the separation between BQP and PH circulated quickly. “The quantum complexity world is a-rocking,” wrote Lance Fortnow, a computer scientist at Georgia Tech, the day after Raz and Tal posted their proof.

The work provides an ironclad assurance that quantum computers exist in a different computational realm than classical computers (at least relative to an oracle). Even in a world where P equals NP — one where the traveling salesman problem is as simple as finding a best-fit line on a spreadsheet — Raz and Tal’s proof demonstrates that there would still be problems only quantum computers could solve.

“Even if P were equal to NP, even making that strong assumption,” said Fortnow, “that’s not going to be enough to capture quantum computing.”

Correction June 21, 2018: An earlier version of this article stated that the version of the traveling salesman problem that asks if a certain path is exactly the shortest distance is “likely” to be in PH. In fact, it has been proved to be in PH.




SOUTH DAKOTA v. WAYFAIR, INC., ET AL. Decided 06/21/2018


Tree, Watercolor on paper, Size 15 X 11 inches.

submitted by /u/prashantsarkar to r/Art