
50 Years of Electroweak Unification


The 50th anniversary of electroweak unification is coming up in a couple of days, since Weinberg’s A Model of Leptons paper was submitted to PRL on October 17, 1967. For many years this was the most heavily cited HEP paper of all time, although once HEP theory entered its “All AdS/CFT, all the time” phase it was eclipsed by the 1997 Maldacena paper (as of today it’s 13118 Maldacena vs. 10875 Weinberg). Another notable fact about the 1967 paper is that it was completely ignored when published, cited only twice from 1967 to 1971.

The latest CERN Courier has (from Frank Close) a detailed history of the paper and how it came about. It also contains a long interview with Weinberg. It’s interesting to compare his comments about the current state of HEP with the ones from 2011 (see here), where he predicted that “If all they discover is the Higgs boson and it has the properties we expect, then No, I would say that the theorists are going to be very glum.”

Today he puts some hope in a non-renormalizable Majorana mass term for neutrinos as evidence for new physics. As for the future:

As to what is the true high-energy theory of elementary particles, Weinberg says string theory is still the best hope we have. “I am glad people are working on string theory and trying to explore it, although I notice that the smart guys such as Witten seem to have turned their attention to solid-state physics lately. Maybe that’s a sign that they are giving up, but I hope not.”

On this last sentiment, I have the opposite hope. He also shares what I think is a common hope for what will save the field (a smart graduate student with a new idea):

Weinberg also still holds hope that one day a paper posted in the arXiv preprint server by some previously unknown graduate student will turn the SM on its head – a 21st century model of particles “that incorporates dark matter and dark energy and has all the hallmarks of being a correct theory, using ideas no one had thought of before”.

Perhaps current training of graduate students in theory should be rethought, to optimize for this.

stefanetal
3 days ago
Get cited twice in the first 4 years!
Northern Virginia
stefanetal
1 day ago
I checked Google Scholar, and the paper has more cites than that in its first four years. Still, it takes until 't Hooft's 1971 "Renormalizable Lagrangians for massive Yang-Mills fields" to get going. https://scholar.google.com/scholar?hl=en&as_sdt=5%2C47&sciodt=0%2C47&cites=15029070178751176831&scipsc=&as_ylo=1966&as_yhi=1969

Source: Deloitte Breach Affected All Company Email, Admin Accounts


Deloitte, one of the world’s “big four” accounting firms, has acknowledged a breach of its internal email systems, British news outlet The Guardian revealed today. Deloitte has sought to downplay the incident, saying it impacted “very few” clients. But according to a source close to the investigation, the breach dates back to at least the fall of 2016, and involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.


In a story published Monday morning, The Guardian said a breach at Deloitte involved usernames, passwords and personal data on the accountancy’s top blue-chip clients.

“The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached,” The Guardian’s Nick Hopkins wrote. “The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was ‘impacted’ by the hack.”

In a statement sent to KrebsOnSecurity, Deloitte acknowledged a “cyber incident” involving unauthorized access to its email platform.

“The review of that platform is complete,” the statement reads. “Importantly, the review enabled us to understand precisely what information was at risk and what the hacker actually did and to determine that only very few clients were impacted [and] no disruption has occurred to client businesses, to Deloitte’s ability to continue to serve clients, or to consumers.”

However, according to a person with direct knowledge of the incident, the company in fact does not yet know precisely when the intrusion occurred, or for how long the hackers were inside its systems.

This source, speaking on condition of anonymity, said the team investigating the breach focused their attention on a company office in Nashville known as the “Hermitage,” where the breach is thought to have begun.

The source confirmed The Guardian’s reporting that current estimates put the intrusion sometime in the fall of 2016, and added that investigators still are not certain they have completely evicted the intruders from the network.

Indeed, it appears that Deloitte has known something was not right for some time. According to this source, the company sent out a “mandatory password reset” email on Oct. 13, 2016 to all Deloitte employees in the United States. The notice stated that employee passwords and personal identification numbers (PINs) needed to be changed by Oct. 17, 2016, and that employees who failed to do so would be unable to access email or other Deloitte applications. The message also included advice on how to pick complex passwords:

A screen shot of the mandatory password reset email Deloitte sent to all U.S. employees in Oct. 2016, around the time sources say the breach was first discovered.

The source told KrebsOnSecurity they were coming forward with information about the breach because, “I think it’s unfortunate how we have handled this and swept it under the rug. It wasn’t a small amount of emails like reported. They accessed the entire email database and all admin accounts. But we never notified our advisory clients or our cyber intel clients.”

“Cyber intel” refers to Deloitte’s Cyber Intelligence Centre, which provides 24/7 “business-focused operational security” to a number of big companies, including CSAA Insurance, FedEx, Invesco, and St. Joseph’s Healthcare System, among others.

This same source said forensic investigators identified several gigabytes of data being exfiltrated to a server in the United Kingdom. The source further said the hackers had free rein in the network for “a long time” and that the company still does not know exactly how much data was taken in total.

In its statement about the incident, Deloitte said it responded by “implementing its comprehensive security protocol and initiating an intensive and thorough review which included mobilizing a team of cyber-security and confidentiality experts inside and outside of Deloitte.” Additionally, the company said it contacted governmental authorities immediately after it became aware of the incident, and that it contacted each of the “very few clients impacted.”

“Deloitte remains deeply committed to ensuring that its cyber-security defenses are best in class, to investing heavily in protecting confidential information and to continually reviewing and enhancing cyber security,” the statement concludes.

Deloitte has not yet responded to follow-up requests for comment. The Guardian reported that Deloitte notified six affected clients, but Deloitte has not yet said publicly when it notified those customers.

Deloitte has a significant cybersecurity consulting practice globally, wherein it advises many of its clients on how best to secure their systems and sensitive data from hackers. In 2012, Deloitte was ranked #1 globally in security consulting based on revenue.

Deloitte refers to one or more member firms of Deloitte Touche Tohmatsu Limited, a private company based in the United Kingdom. According to the company’s Web site, Deloitte has more than 263,000 employees at member firms delivering services in audit and assurance, tax, consulting, financial advisory, risk advisory, and related services in more than 150 countries and territories. Revenues for the fiscal year 2017 were $38.8 billion.

The breach at the big-four accountancy comes on the heels of a massive breach at big-three consumer credit bureau Equifax. That incident involved several months of unauthorized access in which intruders stole Social Security numbers, birth dates, and addresses on 143 million Americans.

This is a developing story. Any updates will be posted as available, and noted with update timestamps.

stefanetal
18 days ago
How is this not a bigger story?
Northern Virginia

A paper, and publishing

Even at my point in life, the moment of publishing an academic paper is one to celebrate, and a moment to reflect.

The New-Keynesian Liquidity Trap is published in the Journal of Monetary Economics -- online, print will be in December. Elsevier (the publisher) allows free access and free pdf downloads at the above link until November 9, and encourages authors to send links to their social media contacts. You're my social media contacts, so enjoy the link and download freely while you can!

The paper is part of the 2012-2013 conversation on monetary and fiscal policies when interest rates are stuck at zero -- the "zero bound" or "liquidity trap." (Which reprised an earlier 2000-ish conversation about Japan.)

At the time, new-Keynesian models and modelers were turning up all sorts of fascinating results, and taking them seriously enough to recommend policy actions. The Fed can strongly stimulate the economy with promises to hold interest rates low in the future. Curiously, the further in the future the promise, the more stimulative.  Fiscal policy, even totally wasted spending, can have huge multipliers. Broken windows and hurricanes are good for the economy. And though price stickiness is the central problem in the economy, lowering price stickiness makes matters worse. (See the paper for citations.)

The paper shows how tenuous all these predictions are. The models have multiple solutions, and the answer they give comes down to an almost arbitrary choice of which solution to pick. The standard choice implies a downward jump in the price level when the recession starts, which requires the government to raise taxes to pay off a windfall to government bondholders. Picking equilibria that don't have this price-level jump, and don't require a jump to large fiscal surpluses (which we don't see), I overturn all the predictions. Sorry, no magic. If you want a better economy, you have to work on supply, not demand.
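To see the multiplicity point in miniature, here is a toy sketch (my illustration, not the paper's model): a stable forward-looking linear equation whose bounded solutions are indexed by an arbitrary initial jump, so the equations alone cannot pin down the prediction. All numbers below are made up.

    import numpy as np

    # Toy illustration of equilibrium multiplicity (not the paper's model):
    # a linear expectational equation x_{t+1} = lam * x_t with |lam| < 1.
    # Every choice of the initial jump x_0 gives a bounded path satisfying
    # the equation, so the model alone does not pin down the answer.
    lam, T = 0.8, 40

    for x0 in (-1.0, 0.0, 2.0):  # three equally valid equilibria
        path = x0 * lam ** np.arange(T)
        assert np.allclose(path[1:], lam * path[:-1])  # the equation holds exactly
        print(f"x0 = {x0:+.1f}: x_10 = {path[10]:+.4f}, max |x| = {np.abs(path).max():.2f}")

Selecting "the" equilibrium then amounts to choosing x_0, which is the analogue of the price-level jump discussed above.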

Today's thoughts, though, are about the state of academic publication.

I wrote the paper in the spring and summer of 2013, posted it to the internet, and started giving talks. Here's the story of its publication:

September 2013. Submitted to AER; NBER and SSRN working papers issued. Blog post.
June 2014. Rejected from AER. Three good referee reports and a thoughtful editor report.
October 2014. Submit revision to QJE.
December 2014. Rejected from QJE. Three more thoughtful referee reports and an editor report.
January 2015. Submit revision to JME.
April 2016. Revise and resubmit from JME. Three referee reports and a long editor report.
June 2016. Send revision to JME.
July 2017. Accept with minor revisions from JME. Many (good) comments from the editor.
August 2017. Final revision to JME.
September 2017. Proofs, publication online.
December 2017. Published in print.

This is about typical. Most of my papers are rejected at 2-3 journals before they find a home, and 3-5 years from first submission to publication is also typical. It's typical for academic publishing in general. Parts of this process went much faster than usual. Three months for a full evaluation at QJE is fast. And once accepted, my paper sped through the JME. Another year or two in the pipeline between acceptance and publication is typical.

Lessons and thoughts?

  • Academic journal publication is not a useful part of communication among researchers or the communication between research and policy. 

Anyone doing research on zero bound in new-Keynesian models in the last 4 years, and carrying on this conversation, interacted with the working paper version of my paper (if at all), not the published version. Any work relying only on published research is hopelessly out of date.

Interest rates lifted off the zero bound quite a while ago, so in the policy conversation this publication at best goes on the shelf of ideas to be revisited if the next recession repeats the last one, with an extended period of zero interest rates, and if we see repeated invocation of the rather magical predictions of new-Keynesian models to cure it. If the next recession is a stagflation or a sovereign debt crisis, you're on your own.

Rather than means of communication,

  • Journal publications have become the archive, 

the ark, the library, the place where final, and perfected versions of papers are carved in stone for future generations. (Some lucky papers that make it to graduate reading lists more than 5-10 years after their impact will be read in final form, but not most.)

And this paper is perfected. The comments of nine very sharp reviewers and three thoughtful editors have improved it substantially, along with dozens of drafts. Papers are a conversation, and it does take a village. The paper also benefited from extensive comments at workshops, and several long email conversations with colleagues.

The passage of time has helped as well. When I go back to a paper after 6 months to a year, I find all sorts of things that can be clearer. Moreover, in the time between first submission and last revision, I wrote four new papers in the same line, and insights from those permeate back to this one.

So, in the end, though the basic points are the same, the exposition is much better.  It's a haiku.  Every word counts.

But such perfection comes at a big cost, in the time of editors and referees, my time, and most of all the cost that the conversation has now moved on.

The combined length of the nine referee reports and the four reports by three editors is much greater than that of the paper. Each one did a serious job, and clearly spent at least a day or two reading the paper and writing thoughtful comments. Moreover, though the reports were excellent, past the first three they by and large made the same points. Was all this effort really worthwhile? I offer some thoughts below on how to economize on referee time.

Of course, for younger people

  • Journal articles are a branding and sorting device. 

Many institutions give tenure, chairs, raises, and other professional advancement based at least in part on the number and placement of publications. For that purpose, timeliness is less of a problem, but with a six-year tenure clock at many places and five-year publication lags, the sorting and branding function isn't working that well either. Maybe we should just have star ratings instead. I don't think the journals see this as their function; they'd rather people read papers and make tenure decisions accordingly, so I won't comment much more.

There is some good news in this data point, relative to the state of journal publishing 15-20 years ago. (See Glenn Ellison's superb "The slowdown in the economics publishing process," JSTOR, ungated, one of my proudest moments as a JPE editor.)

  • Journals are doing fewer rounds, more desk rejections, more one-round, up-or-out decisions.

Journals had gotten into a rut of asking for round after round of revisions. Now there is a strong ethic of either rejecting the paper, or doing one round of revisions and then either publishing with minor changes or not. Related,

  • Journal editors are more decisive. 

Journal editors have become, well, editors. The referees provide advice, but the editor thinks about it, decides which advice is good and which is not, and makes the final call. Editors used to defer decisions to referees, which is part of the reason there were endless revisions. This change is very good. Referees have little incentive to bring the process to a close, and they don't see the pipeline of papers coming to the journal. They are not in a good position to find the right balance of perfection and timeliness.

In my case, editors were very active. The referees wrote thoughtful reports, but largely made similar points. In fact, the strongest advice to reject came at the JME. But the AER and QJE editors were not impressed in the end by the paper, and the JME editor was.

So, with this state of affairs in mind, how might we all work to improve journals and the publication process?

I will take for granted that greater speed, and making journals more effective at communication rather than just archiving and ranking, are important. For one reason: to the extent that journals continue to lose the communication function, people won't send articles there. Already you can notice that after tenure, more and more economists publish in conference volumes, invited papers, edited volumes, and other outlets (blogs!). The fraction willing to take on this labor of love for journal publication declines quickly with age. Research productivity and creativity do not decline nearly so quickly. (I hope!)

Suggestion one:

  • Adopt the golden rule of refereeing

Around any economist cocktail party, there is a lot of whining that journals should do x, y, and z to speed things up. I start with what you and I can do. It is: do unto others as you would have them do unto you. If you complain about slow journals, well, how quickly do you turn around reports?

My recommendation, which is the rule I try to follow: answer the email within a day. Spend an hour or two with the paper, and decide if you will referee it or not. If not, say so that day. If you can give a quick reaction along with your reason, that helps editors. And suggest a few other referees. Often editors aren't completely up to date on just who has written what and who is an ideal fit. If you're not the ideal fit, then help the editor by finding a better fit, and do it right away.

If you agree to do a report, do it within a week. If you can't do it this week, you're not likely to be able to do it five weeks from now, so say no.

More suggestions:

  • Reuse referee reports
Do we really need nine referee reports to evaluate one paper? When I send a previously rejected paper to a new journal, I always offer the editor the option of using the existing referee reports, along with my response explaining how I have, or have not, incorporated their suggestions. Nobody has ever taken me up on this offer. Why not? Especially now that editors are making more of the decisions? Some people mistakenly view publication as a semi-judicial proceeding in which authors have a "right" to new opinions. Sorry, journals are there to publish papers. 

Why not open refereeing? The report, and author's response, go to a public repository that others can see. Why not let anyone comment on papers? Authors can respond. Often the editor doesn't know who the best person is to referee a paper. Maybe a conference discussant has a good insight. At least one official reviewer could benefit from collecting such information. Some science journals do this. 

Some people would hate this. OK, but perhaps that should be a choice. Fast and public, or slow and private. 

While we're at it, what about
  • Simultaneous submission. Competition (heavens!)  

Journals insist that you only send to one journal at a time. And then wait a year or more to hear what they want to do with it. Especially now that we are moving towards the editor-centric system, and the central question is a match with editor's tastes, why not let journal editors share reviewer advice and compete for who wants to publish it? By essentially eliminating the sequential search for a sympathetic editor, this could speed up the process substantially.

I don't know why lower-ranked journals put up with this. It's the way the top journals get the order flow of the best papers. Why doesn't another journal say: you can send your paper to us at the same time you send it to the AER. We'll respect their priority, but if they don't want it, we have first right to it. The AER almost does this with its field journals. But the JME could get more, and better, papers faster by competing on this dimension.

The journals say they do this to preserve the value of their reviewer time. But with shared or open reviews, that argument falls apart.

We advocate competition elsewhere. Why not in our own profession?

Update: An email correspondent brings up a good point:

  • Journals should be the forum where competing views are hashed out. 
They should be part of the "process of formalizing well-argued, different points of view -- not refereeing 'the truth.' We don't know the truth, but we hopefully get closer to it by arguing [in public, and in the journals]. The never-ending refereeing [and editing and publishing] process is shutting down the conversation."

When I read well argued papers that I disagree with, I tend to write "I disagree with just about everything in this paper. But it's a well-argued case for a common point of view. If my devastating report does not convince the author, the paper should be published, and I should write up my objections as a response paper." 




stefanetal
25 days ago
Maybe I should have tried harder to get my papers published, but I just left after my dissertation papers got more than a four-year runaround. Encouraging to see that good stuff like this from a famous guy gets the same hassles.

And I've had an editor of Econometrica call me about one of those papers 10 years after the fact, saying I should submit even after all that. The world is crazy.
Northern Virginia

Peer review is younger than you think


Via Ben Schmidt, the term becomes common only in the 1970s.

I’d like to see a detailed look at actual journal practices, but my personal sense is that editorial review was the norm until fairly recently, not review by a team of outside referees.  In 1956, for instance, the American Historical Review asked for only one submission copy, and it seems the same was true as late as 1970.  I doubt they made the photocopies themselves. Schmidt seems to suggest that the practices of government funders nudged the academic professions into more formal peer review with multiple referee reports.

Further research is needed (how about we ask some really old people?), at least if peer review decides it is worthy of publication.  Frankly I suspect such work would stand a better chance under editorial review.

In the meantime, here is a tweet from the I didn’t know she was on Twitter Judy Chevalier:

I have just produced a 28-page “responses to reviewer and editor questions” for a 39-page paper.

I’d rather have another paper from Judy.

By the way, scientific papers are getting less readable.

The post Peer review is younger than you think appeared first on Marginal REVOLUTION.

stefanetal
30 days ago
Yes, a history of journal practices would be nice to have. In econ, the current system has unbelievable publication delays, with many rounds of revise and resubmit. How and why we got here would be good to know.
Northern Virginia

Type M errors in the wild—really the wild!


Jeremy Fox points me to this article, “Underappreciated problems of low replication in ecological field studies,” by Nathan Lemoine, Ava Hoffman, Andrew Felton, Lauren Baur, Francis Chaves, Jesse Gray, Qiang Yu, and Melinda Smith, who write:

The cost and difficulty of manipulative field studies makes low statistical power a pervasive issue throughout most ecological subdisciplines. . . . In this article, we address a relatively unknown problem with low power: underpowered studies must overestimate small effect sizes in order to achieve statistical significance. First, we describe how low replication coupled with weak effect sizes leads to Type M errors, or exaggerated effect sizes. We then conduct a meta-analysis to determine the average statistical power and Type M error rate for manipulative field experiments that address important questions related to global change: global warming, biodiversity loss, and drought. Finally, we provide recommendations for avoiding Type M errors and constraining estimates of effect size from underpowered studies.

As with the articles discussed in the previous post, I haven’t read this article in detail, but of course I’m supportive of the general point, and I have every reason to believe that type M errors are a big problem in a field such as ecology where measurement is difficult and variation is high.
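To make the mechanism concrete, here is a minimal simulation sketch of a type M error, assuming a hypothetical two-arm field experiment with a small true effect (0.2), noisy outcomes (sd 1), and 20 replicates per arm; all of these numbers are illustrative, not taken from the paper.

    import numpy as np

    # Minimal sketch of the type M (magnitude) error mechanism. The effect
    # size, noise level, and sample size are hypothetical, not from
    # Lemoine et al.
    rng = np.random.default_rng(0)
    true_effect, sd, n, sims = 0.2, 1.0, 20, 100_000
    se = sd * np.sqrt(2 / n)                      # s.e. of a difference in means

    est = rng.normal(true_effect, se, size=sims)  # sampling distribution of estimates
    significant = np.abs(est / se) > 1.96         # two-sided test at p < 0.05

    power = significant.mean()
    exaggeration = np.abs(est[significant]).mean() / true_effect
    print(f"power ~ {power:.2f}")                 # badly underpowered, ~0.10
    print(f"significant estimates overstate the effect ~ {exaggeration:.1f}x")

With these settings an estimate must exceed roughly 0.62 in absolute value to reach significance, so any "significant" result overstates the true effect of 0.2 by at least a factor of three.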

P.S. Steven Johnson sent in the above picture of a cat who is not in the wild, but would like to be.

The post Type M errors in the wild—really the wild! appeared first on Statistical Modeling, Causal Inference, and Social Science.

stefanetal
31 days ago
Not sure how to solve this given incentives for researchers and budget constraints. And if only fully powered expensive studies are funded, then most researchers will be out of research.
Northern Virginia

Social media are making price gouging too difficult these days


That is the topic of my latest Bloomberg column.  Here is one bit:

Let’s say bottled water was selling at $42.96 a case at the local Best Buy, as shown in this photo. A customer can take out his or her smartphone, snap a photo and post it on social media. The photo may go viral, and many people, including the legal authorities, will be mad at the company.

The reluctance to raise prices is especially strong for nationally branded stores. A local merchant may not care much if people in Iowa are upset at his prices, but major companies will fear damage to their national reputations. The short-term return from selling the water at a higher price is dwarfed by the risk to their business prospects. More and more of the value of business capital is intangible capital, more than 84 percent of the S&P 500 by some estimates. That’s why Best Buy so quickly apologized for its store selling the water at such a high price, blaming the incident on an overzealous local manager.

Consider an alternative: Instead of raising prices to very high levels, let’s say that the local big-box store sells out quickly during an emergency and has empty shelves for water. If those photos circulate, they will be interpreted as signs of general tragedy and want, rather than selfish corporate behavior. It’s too subtle an image to snap the price tag at pre-storm levels, contrast it with the empty shelves, and lecture your Facebook friends about the workings of market-clearing supply and demand and the virtues of flexibly adjusting prices.
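As a back-of-the-envelope illustration of the market-clearing logic in that last paragraph, here is a small sketch with hypothetical linear demand and a fixed stock of water; every number in it is made up.

    # Illustrative market-clearing arithmetic with made-up numbers:
    # linear demand Q = a - b*P against a fixed emergency stock of water.
    a, b = 200.0, 10.0            # hypothetical demand intercept and slope
    stock = 80.0                  # cases on the shelf
    posted_price = 6.0            # the pre-storm posted price

    clearing_price = (a - stock) / b          # price at which demand equals stock
    demand_at_posted = a - b * posted_price   # quantity demanded at the old price

    print(f"market-clearing price: ${clearing_price:.2f} per case")
    print(f"at the posted ${posted_price:.2f}, demand is {demand_at_posted:.0f} "
          f"cases against {stock:.0f} on hand: empty shelves")

Holding the price at the posted level leaves excess demand (here 60 cases), which shows up as the empty shelves the column describes; raising the price to the clearing level eliminates the shortage but produces the viral photo.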

Beware the culture of the image!  As I’ve said before, we should levy a micro-tax on photos on Twitter.

Here is Don Boudreaux on price gouging.  Here is David Henderson on price gouging.  I agree with them both.

The post Social media are making price gouging too difficult these days appeared first on Marginal REVOLUTION.

stefanetal
42 days ago
Price gouging is mostly bad IF there is also a (formal or informal) rationing system, say if the store knows which customers usually buy what or if people choose quantities based on some fair need given circumstances. Otherwise not. It depends on the market structure and sociology.
Northern Virginia
duerig
40 days ago
It is also bad in an unequal society where some people have more money to spend on casual interests than others do on dire needs. Actually, I think more generally that just as economic theory handles cases of abundance poorly, so too does it handle cases of extreme scarcity poorly. It works best when something is moderately scarce: when a commodity is in short supply but a moderate effort will add to the supply. When a commodity cannot be manufactured, or is in limitless supply, those are the times when economics ceases to be a good guide.