1-dan master of the unyielding fist of Bayesian inference

Albino Vampire Bat, Digital, 1441x1920px

Submitted by /u/seasparkle to r/Art.

Rules and Strategies in the Fed’s New Monetary Policy Report


The Fed’s Monetary Policy Report released last Friday devotes a lot of space to monetary policy rules. This is the third report in a row to include such discussions, the first being in July 2017 and the second in February 2018. Compared with the previous two reports, this one adds new material on policy rules, and it is accompanied by a helpful discussion of policy rules now integrated into the Monetary Policy Principles and Practice section of the Fed’s web page.

All this represents progress in my view. It would be good if the new material generates some questions and answers in the Senate and House hearings with Fed Chair Jay Powell this week. Having such a discussion is one of the purposes of the bills in Congress under which the Fed would report on policy rules and strategies.

As with Fed minutes, there is something to be gained from examining the similarities and differences between the most recent and previous reports, though the process of reporting is probably still evolving, and one purpose of policy rules is that they entail less, not more, fine-tuning.

The new report presents (p. 37) the same key principles of policy embedded in the Taylor rule and other policy rules as discussed in previous reports: “Policy rules can incorporate key principles of good monetary policy. One key principle is that monetary policy should respond in a predictable way to changes in economic conditions. A second key principle is that monetary policy should be accommodative when inflation is below the desired level and employment is below its maximum sustainable level; conversely, monetary policy should be restrictive when the opposite holds. A third key principle is that, to stabilize inflation, the policy rate should be adjusted by more than one-for-one in response to persistent increases or decreases in inflation.” The section “Principles for the Conduct of Monetary Policy” on the Fed’s web site discusses in more detail how these principles relate to policy rules and explains the rationale for the third principle, sometimes called the Taylor principle.

Another similarity is that the new report focuses on the same five rules as in February: the “well-known Taylor (1993) rule” and “Other rules,” which “include the ‘balanced approach’ rule [the Taylor rule with a higher coefficient on the output variable], the ‘adjusted Taylor (1993)’ rule, the ‘price level’ rule, and the ‘first difference’ rule.”
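
To make the first two rules concrete, here is a minimal sketch in Python (my own illustration, not code from the report), assuming Taylor’s original 2 percent values for the neutral real rate and the inflation target. Note that the combined response to inflation is 1.5, i.e., more than one-for-one, which is exactly the Taylor principle quoted above.

```python
def taylor_1993(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*y.
    All arguments are in percent; r* = 2.0 is Taylor's original assumption."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

def balanced_approach(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """The same rule with a higher coefficient (1.0) on the output variable."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 1.0 * output_gap

# With inflation at target and a closed output gap, both prescribe 4 percent.
print(taylor_1993(2.0, 0.0), balanced_approach(2.0, 0.0))
```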

The paragraph with the title Monetary policy rules (p. 3) is identical to the one in the previous reports.  Among other things, it states that monetary policymakers “routinely consult monetary policy rules.”  But later paragraphs differ (italics added to show this):

Feb 2018: “However, the use and interpretation of such prescriptions require careful judgments about the choice and measurement of the inputs to these rules as well as the implications of the many considerations these rules do not take into account” (pp. 31-32).

July 2018: “However, the use and interpretation of such prescriptions require, among other considerations, careful judgments about the choice and measurement of the inputs to these rules such as estimates of the neutral interest rate, which are highly uncertain” (p. 36).

That is, the latest report focuses on uncertainty about the “neutral” or “equilibrium real” interest rate, while the earlier reports also focused on uncertainty about the neutral unemployment rate and the measures of inflation. Indeed, the latest report has a new table (p. 41) and a long discussion reporting on recent research on the neutral rate. Note that the point estimates range from 0.1 percent to 1.8 percent.

I think it is significant that the discussion of the neutral rate is placed within the discussion of policy rules in the report. Like many aspects of uncertainty, this particular uncertainty has profound effects on policy making whether policy is rules-based or not. However, the discussion of the policy implications of this uncertainty is much clearer and more informative when it falls, as in this report, within a framework of policy rules.
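
A rules framework also makes the stakes of that uncertainty easy to quantify. As a rough illustration (my own arithmetic, using the Taylor (1993) formula sketched above and the 0.1-to-1.8 percent span of point estimates from the report; not a calculation the report itself performs):

```python
# Taylor (1993) prescription at 2 percent inflation and a closed output gap,
# across the span of neutral-rate point estimates reported on p. 41.
for r_star in (0.1, 1.0, 1.8):
    rate = r_star + 2.0 + 0.5 * (2.0 - 2.0) + 0.5 * 0.0
    print(f"r* = {r_star}%  ->  prescribed policy rate = {rate}%")
```

Even with inflation at target and the output gap closed, the rule’s prescription moves point-for-point with the neutral-rate estimate, here spanning 2.1 to 3.8 percent.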

That there is a wide range of uncertainty is illustrated in a time-series diagram drawn from the report. Note that the estimated neutral rate was much higher in the years from 2002 to 2007, before the Great Recession, indicating that, according to these estimates, the “too-low-for-too-long” period cannot be rationalized as due to a decline in the neutral rate.

There is more on policy rules in the report and also, as mentioned above, on the web page. I would note in particular the section Policy Rules and How Policymakers Use Them, which discusses alternative policy rules; the section Challenges Associated with Using Rules to Make Monetary Policy, which delves into issues that the Fed faces when it implements rules; and the section Monetary Policy Strategies of Major Central Banks, which contains a good discussion of what is happening abroad, with the conclusion that “other major central banks use policy rules in a similar fashion.”  This section is very important when one considers monetary policy normalization and monetary reform on a global scale, which is entirely appropriate in today’s integrated world economy.




An Illustrated Proof of the CAP Theorem

Submitted by /u/abhimanyusaxena to r/programming.

Himalayan Black-Lored Tit, Ballpoint Pen on Bristol Board Paper, A4

Submitted by /u/wodw to r/Art.

Crown Shyness

Submitted by /u/abrieabrie to r/wikipedia.

Customers who liked this quantum recommendation engine might also like its dequantization


I’m in Boulder, CO right now for the wonderful Boulder summer school on quantum information, where I’ll be lecturing today and tomorrow on introductory quantum algorithms.  But I now face the happy obligation of taking a break from all the lecture-preparing and schmoozing, to blog about a striking new result by a student of mine—a result that will probably make an appearance in my lectures as well.

Yesterday, Ewin Tang—an 18-year-old who just finished a bachelor’s at UT Austin, and who will be starting a PhD in CS at the University of Washington in the fall—posted a preprint entitled A quantum-inspired classical algorithm for recommendation systems. Ewin’s new algorithm solves the following problem, very loosely stated: given m users and n products, and incomplete data about which users like which products, organized into a convenient binary tree data structure; and given also the assumption that the full m×n preference matrix is low-rank (i.e., that there are not too many ways the users vary in their preferences), sample some products that a given user is likely to want to buy.  This is an abstraction of the problem that’s famously faced by Amazon and Netflix, every time they tell you which books or movies you “might enjoy.”  What’s striking about Ewin’s algorithm is that it uses only polylogarithmic time: that is, polynomial in log(m), log(n), the matrix rank, and the relevant error parameters.  Admittedly, the polynomial involves exponents of 33 and 24: so, not exactly “practical”!  But it seems likely to me that the algorithm will run much, much faster in practice than it can be guaranteed to run in theory.  Indeed, if any readers would like to implement the thing and test it out, please let us know in the comments section!
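
For the curious: loosely speaking, the “convenient binary tree data structure” is one that lets you query any entry of a vector and sample an index with probability proportional to the entry’s squared magnitude, both in logarithmic time. Here is a minimal sketch of such a structure (my own illustrative construction under simplified assumptions; see the paper for the exact access model):

```python
import random

class NormTree:
    """Binary tree over a vector v supporting O(log n) updates and
    sampling index i with probability v[i]**2 / ||v||**2."""

    def __init__(self, values):
        size = 1
        while size < len(values):
            size *= 2
        self.size = size
        self.sums = [0.0] * (2 * size)   # internal nodes hold sums of squares
        self.vals = [0.0] * size         # leaves hold the signed entries
        for i, v in enumerate(values):
            self.update(i, v)

    def update(self, i, v):
        self.vals[i] = v
        node = self.size + i
        self.sums[node] = v * v
        node //= 2
        while node >= 1:                 # propagate squared norms to the root
            self.sums[node] = self.sums[2 * node] + self.sums[2 * node + 1]
            node //= 2

    def sample(self):
        node = 1                          # walk down, branching in proportion
        while node < self.size:           # to each subtree's squared norm
            if random.random() * self.sums[node] < self.sums[2 * node]:
                node = 2 * node
            else:
                node = 2 * node + 1
        return node - self.size
```

The KP algorithm assumes the preference data arrives in (roughly) this form; Ewin’s observation is that the same kind of access turns out to be enough for a classical algorithm too.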

As the title suggests, Ewin’s algorithm was directly inspired by a quantum algorithm for the same problem, which Kerenidis and Prakash (henceforth KP) gave in 2016, and whose claim to fame was that it, too, ran in polylog(m,n) time.  Prior to Ewin’s result, the KP algorithm was arguably the strongest candidate there was for an exponential quantum speedup for a real-world machine learning problem.  The new result thus, I think, significantly changes the landscape for quantum machine learning; note that whether KP gives a real exponential speedup was one of the main open problems mentioned in John Preskill’s survey on the applications of near-term quantum computers.  At the same time, Ewin’s result yields a new algorithm that can be run on today’s computers, that could conceivably be useful to those who need to recommend products to customers, and that was only discovered by exploiting intuition that came from quantum computing. So I’d consider this both a defeat and a victory for quantum algorithms research.

This result was the outcome of Ewin’s undergraduate thesis project (!), which I supervised. A year and a half ago, Ewin took my intro quantum information class, whereupon it quickly became clear that I should offer this person an independent project.  So I gave Ewin the problem of proving a poly(m,n) lower bound on the number of queries that any classical randomized algorithm would need to make to the user preference data, in order to generate product recommendations for a given user, in exactly the same setting that KP had studied.  This seemed obvious to me: in their algorithm, KP made essential use of quantum phase estimation, the same primitive used in Shor’s factoring algorithm.  Without phase estimation, you seemed to be stuck doing linear algebra on the full m×n matrix, which of course would take poly(m,n) time.  But KP had left the problem open, I didn’t know how to solve it either, and nailing it down seemed like an obvious challenge, if we wanted to establish the reality of quantum speedups for at least one practical machine learning problem.  (For the difficulties in finding such speedups, see my essay for Nature Physics, much of which is still relevant even though it was written prior to KP.)

Anyway, for a year, Ewin tried and failed to rule out a superfast classical algorithm for the KP problem—eventually, of course, discovering the unexpected reason for the failure!  Throughout this journey, I served as Ewin’s occasional sounding board, but can take no further credit for the result.  Indeed, I admit that I was initially skeptical when Ewin told me that phase estimation did not look essential after all for generating superfast recommendations—that a classical algorithm could get a similar effect by randomly sampling a tiny submatrix of the user preference matrix, and then carefully exploiting a variant of a 2004 result by Frieze, Kannan, and Vempala.  So when I was in Berkeley a few weeks ago for the Simons quantum computing program, I had the idea of flying Ewin over to explain the new result to the experts, including Kerenidis and Prakash themselves.  After four hours of lectures and Q&A, a consensus emerged that the thing looked solid.  Only after that gauntlet did I advise Ewin to put the preprint online.
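
To give a flavor of the classical ingredient: the Frieze-Kannan-Vempala approach approximates the singular structure of a large matrix from a small random submatrix, sampling rows with probability proportional to their squared norms. A rough sketch of that sampling step (my paraphrase under simplified assumptions, not the algorithm of either paper):

```python
import numpy as np

def sample_rows(A, s, rng=None):
    """Length-squared row sampling in the Frieze-Kannan-Vempala style:
    draw s rows of A with probability proportional to their squared norms,
    rescaled so that E[S.T @ S] = A.T @ A.  The SVD of the small matrix S
    then approximates the top singular directions of A."""
    rng = rng or np.random.default_rng(0)
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    return A[idx] / np.sqrt(s * p[idx])[:, None]
```

In Ewin’s setting the required samples come from the binary tree structure sketched earlier, so the algorithm never has to read the full m×n matrix.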

So what’s next?  Well, one obvious challenge is to bring down the running time of Ewin’s algorithm, and (as I mentioned before) to investigate whether or not it could give a practical benefit today.  A different challenge is to find some other example of a quantum algorithm that solves a real-world machine learning problem with only a polylogarithmic number of queries … one for which the exponential quantum speedup is Ewin-proof!  The field is now wide open.  It’s possible that my Forrelation problem, which Raz and Tal recently used for their breakthrough oracle separation between BQP and PH, could be an ingredient in such a separation.

Anyway, there’s much more to say about Ewin’s achievement, but I now need to run to lecture about quantum algorithms like Simon’s and Shor’s, which do achieve exponential speedups!  Please join me in offering hearty congratulations, see Ewin’s nicely-written paper for details, and if you have any questions for me or (better yet) Ewin, feel free to ask in the comments.
