Inside The Mind That Built Google Brain: On Life, Creativity, And Failure

Source: The Huffington Post

(Photo: Jemal Countess/Getty)

Here’s a list of universities with arguably the greatest computer science programs: Carnegie Mellon, MIT, UC Berkeley, and Stanford. These are the same places, respectively, where Andrew Ng received his bachelor’s degree, his master’s, his Ph.D., and has taught for 12 years.

Ng is an icon of the artificial intelligence world with the pedigree to match, and he is not yet 40 years old. In 2011, he founded Google Brain, a deep-learning research project supercharged by Google’s vast stores of computing power and data. Delightfully, one of its most important achievements came when computers analyzing scores of YouTube screenshots were able to recognize a cat. (The New York Times headline: “How Many Computers to Identify a Cat? 16,000.”) As Ng explained, “The remarkable thing was that [the system] had discovered the concept of a cat itself. No one had ever told it what a cat is. That was a milestone in machine learning.”

Ng exudes a cheerful but profound calm. He happily discusses the various mistakes and failures of his career, the papers he read but didn’t understand. He wears an identical blue oxford shirt every day. He is blushing but proud when a colleague mentions his adorable robot-themed engagement photo shoot with his now-wife, a surgical roboticist named Carol Reiley (note his shirt in the photo).

One-on-one, he speaks with a softer voice than anyone you know, though this has not hindered his popularity as a lecturer. In 2011, when he posted videos from his own Stanford machine learning course on the web, over 100,000 people registered. Within a year, Ng had co-founded Coursera, which is today the largest provider of open online courses. Its partners include Princeton and Yale, as well as top schools in China and across Europe. It is a for-profit venture, though all classes are accessible for free. “Charging for content would be a tragedy,” Ng has said.

(Photo: Colson Griffith)

Then, last spring, a shock. Ng announced he was departing Google and stepping away from day-to-day involvement at Coursera. The Chinese tech giant Baidu was establishing an ambitious $300 million research lab devoted to artificial intelligence just down the road from Google’s Silicon Valley headquarters, and Andrew Ng would head it up.

At Baidu, as before, Ng is trying to help computers identify audio and images with incredible accuracy, in real time. (On Tuesday, Baidu announced it had achieved the world’s best results on a key artificial intelligence benchmark related to image identification, besting Google and Microsoft.) Ng believes speech recognition with 99 percent accuracy will spur revolutionary changes to how humans interact with computers, and how operating systems are designed. Simultaneously, he must help Baidu work well for the millions of search users who are brand new to digital life. “You get queries [in China] that you just wouldn’t get in the United States,” Ng explained. “For example, we get queries like, ‘Hi Baidu, how are you? I ate noodles at a corner store last week and they were delicious. Do you think they’re on sale this weekend?’ That’s the query.” Ng added: “I think we make a good attempt at answering.”

Elon Musk and Stephen Hawking have been sounding alarms over the potential threat to humanity from advanced artificial intelligence. Andrew Ng has not. “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars,” he has said. AI is many decades away (if not longer) from achieving something akin to consciousness, according to Ng. In the meantime, there’s a far more urgent problem. Computers enhanced by machine learning are eliminating jobs long done by humans. The trend is only accelerating, and Ng frequently calls on policymakers to prepare for the socioeconomic consequences.

At Baidu’s new lab in Sunnyvale, Calif., we spoke to Andrew Ng for Sophia, a HuffPost project to collect life lessons from fascinating people. He explained why he thinks “follow your passion” is terrible career advice, and he shared his strategy for teaching creativity; Ng discussed his failures and his helpful habits, the most influential books he’s read, and his latest thoughts on the frontiers of AI.

You recently said, “I’ve seen people learn to be more creative.” Can you explain?

The question is, how does one create new ideas? Is it those unpredictable lone acts of genius, people like Steve Jobs, who are special in some way? Or is it something that can be taught and that one can be systematic about?

I believe that the ability to innovate and to be creative are teachable processes. There are ways by which people can systematically innovate or systematically become creative. One thing I’ve been doing at Baidu is running a workshop on the strategy of innovation. The idea is that innovation is not these random unpredictable acts of genius, but that instead one can be very systematic in creating things that have never been created before.

In my own life, I found that whenever I wasn’t sure what to do next, I would go and learn a lot, read a lot, talk to experts. I don’t know how the human brain works but it’s almost magical: when you read enough or talk to enough experts, when you have enough inputs, new ideas start appearing. This seems to happen for a lot of people that I know.

When you become sufficiently expert in the state of the art, you stop picking ideas at random. You are thoughtful in how to select ideas, and how to combine ideas. You are thoughtful about when you should be generating many ideas versus pruning down ideas.

Now there is a challenge still — what do you do with the new ideas, how can you be strategic in how to advance the ideas to build useful things? That’s another whole piece.

Can you talk about your information diet, how you approach learning?

I read a lot and I also spend time talking to people a fair amount. I think two of the most efficient ways to learn, to get information, are reading and talking to experts. So I spend quite a bit of time doing both of them. I think I have just shy of a thousand books on my Kindle. And I’ve probably read about two-thirds of them.

At Baidu, we have a reading group where we read about half a book a week. I’m actually part of two reading groups at Baidu, each of which reads about half a book a week. I think I’m the only one who’s in both of those groups [laughter]. And my favorite Saturday afternoon activity is sitting by myself at home reading.

Let me ask about your early influences. Is there something your parents did for you that many parents don’t do that you feel had a lasting impact on your life?

I think when I was about six, my father bought a computer and helped me learn to program. A lot of computer scientists learned to program from an early age, so it’s probably not that unique, but I think I was one of the ones fortunate enough to have had a computer and to have started learning to program from a very young age.

Unlike the stereotypical Asian parents, my parents were very laid back. Whenever I got good grades in school, my parents would make a fuss, and I actually found that slightly embarrassing. So I used to hide them. [Laughter] I didn’t like showing my report card to my parents, not because I was doing badly but because of their reaction.

I was also fortunate to have gotten to live and work in many different places. I was born in the U.K., raised in Hong Kong and Singapore, and came to the U.S. for college. Then for my own studies, I have degrees from Carnegie Mellon, MIT, and Berkeley, and then I was at Stanford.

I was very fortunate to have moved to all these places and gotten to meet some of the top people. I interned at AT&T Bell Labs when it existed, one of the top labs, and then at Microsoft Research. I got to see a huge diversity of points of view.

Is there anything about your education or your early career that you would have done differently? Any lessons you’ve learned that people could benefit from?

I wish we as a society gave better career advice to young adults. I think that “follow your passion” is not good career advice. It’s actually one of the most terrible pieces of career advice we give people.

If you are passionate about driving your car, it doesn’t necessarily mean you should aspire to be a race car driver. In real life, “follow your passion” actually gets amended to, “Follow your passion of all the things that happen to be a major at the university you’re attending.”

But often, you first become good at something, and then you become passionate about it. And I think most people can become good at almost anything.

So when I think about what to do with my own life, what I want to work on, I look at two criteria. The first is whether it’s an opportunity to learn. Does the work on this project allow me to learn new and interesting and useful things? The second is the potential impact. The world has an infinite supply of interesting problems. The world also has an infinite supply of important problems. I would love for people to focus on the latter.

I’ve been fortunate to have repeatedly been able to find opportunities that had a lot of potential for impact and also gave me fantastic opportunities to learn. I think young people optimizing for these two things will often have the best careers.

Our team here has a mission of developing hard AI technologies, advanced AI technologies that let us impact hundreds of millions of users. That’s a mission I’m genuinely excited about.

Do you define importance primarily by the number of people who are impacted?

No, I don’t think the number is the only thing that’s important. Changing hundreds of millions of people’s lives in a significant way, I think that’s the level of impact that we can reasonably aspire to. That is one way of making sure we do work that isn’t just interesting, but that also has an impact.

You’ve talked previously about projects of yours that have failed. How do you respond to failure?

Well, it happens all the time, so it’s a long story. [Laughter] A few years ago, I made a list in Evernote and tried to remember all the projects I had started that didn’t work out, for whatever reason. Sometimes I was lucky and it worked out in a totally unexpected direction, through luck rather than skill.

But I made a list of all the projects I had worked on that didn’t go anywhere, or that didn’t succeed, or that had much less to show for it relative to the effort that we put into it. Then I tried to categorize them in terms of what went wrong and tried to do a pretty rigorous post mortem on them.

So, one of these failures was at Stanford. For a while we were trying to get aircraft to fly in formation to realize fuel savings, inspired by geese flying in a V-shaped formation. The aerodynamics are actually pretty solid. So we spent about a year working on making these aircraft fly autonomously. Then we tried to get the airplanes to fly in formation.

But after a year of work, we realized that there is no way that we could control the aircraft with sufficient accuracy to realize fuel savings. Now, if at the start of the project we had thought through the position requirements, we would have realized that with the small aircraft we were using, there is just no way we could do it. Wind gusts will blow you around far more than the precision needed to fly the aircraft in formation.

So one pattern of mistakes I’ve made in the past, hopefully much less now, is doing projects where you do step one, you do step two, you do step three, and then you realize that step four has been impossible all along. I discuss this specific example in the innovation strategy workshop I mentioned. The lesson is to de-risk projects early.

I’ve become much better at identifying risks and assessing them earlier on. Now when I say things like, “We should de-risk a project early,” everyone will nod their head because it’s just so obviously true. But the problem is when you’re actually in this situation and facing a novel project, it’s much harder to apply that to the specific project you are working on.

The reason is that these sorts of research projects require a strategic skill. In our educational system we’re pretty good at teaching facts and procedures, like recipes. How do you cook spaghetti bolognese? You follow the recipe. We’re pretty good at teaching facts and recipes.

But innovation or creativity is a strategic skill where every day you wake up and it’s a totally unique context that no one’s ever been in, and you need to make good decisions in your completely unique environment. So as far as I can tell, the only way we know to teach strategic skills is by example, by seeing tons of examples. The human brain, when you see enough examples, learns to internalize those rules and guidelines for making good strategic decisions.

Very often, what I find is that for people doing research, it takes years to see enough examples and to learn to internalize those guidelines. So what I’ve been experimenting with here is to build a flight simulator for innovation strategy. Instead of having everyone spend five years before they see enough examples, the idea is to deliver many examples in a much more compressed time frame.

Just as with a flight simulator: if you want to learn to fly a 747, you would need to fly for years, maybe decades, before you see any emergencies in real flight. But in a flight simulator, we can show you tons of emergencies in a very compressed period of time and allow you to learn much faster. Those are the sorts of things we’ve been experimenting with.

When this lab first opened, you noted that for much of your career you hadn’t seen the importance of team culture, but that you had come to realize its value. Several months in, is there anything you’ve learned about establishing the right culture?

A lot of organizations have cultural documents like, “We empower each other,” or whatever. When you say it, everyone nods their heads, because who wouldn’t want to empower their teammates? But when they go back to their desks five minutes later, do they actually do it? It’s difficult for people to bridge the abstract and the concrete.

At Baidu, we did one thing for the culture that I think is rare. I don’t know of any organization that has done this. We created a quiz that describes to employees specific scenarios — it says, “You’re in this situation and this happens. What do you do: A, B, C, or D?”

No one has ever gotten full marks on this quiz the first time out. I think the quiz’s interactivity, asking team members to respond to specific hypothetical scenarios, has been our way of trying to connect the abstract culture with the concrete: what do you actually do when a teammate comes to you and does this thing?

What are some books that had a substantial impact on your intellectual development?

Recently I’ve been thinking about the set of books I’d recommend to someone wanting to do something innovative, to create something new.

The first is “Zero to One” by Peter Thiel, a very good book that gives an overview of entrepreneurship and innovation.

We often break down entrepreneurship into B2B (“business to business,” i.e., businesses whose customers are other businesses) and B2C (“business to consumer”). For B2B, I recommend “Crossing the Chasm.” For B2C, one of my favorite books is “The Lean Startup,” which takes a narrower view but gives one specific tactic for innovating quickly. It’s a little narrow, but it’s very good in the area that it covers.

Then to break B2C down even further, two of my favorites are “Talking to Humans,” a very short book that teaches you how to develop empathy for the users you want to serve by talking to them, and “Rocket Surgery Made Easy.” If you want to build products that are important, that users care about, the latter teaches you tactics for learning about users, through both user studies and interviews.

Then finally there is “The Hard Thing About Hard Things.” It’s a bit dark, but it does cover a lot of useful territory on what building an organization is like.

For people who are trying to figure out career decisions, there’s a very interesting one: “So Good They Can’t Ignore You.” That gives a valuable perspective on how to select a path for one’s career.

Do you have any helpful habits or routines?

I wear blue shirts every day, I don’t know if you know that. [laughter] Yes. One of the biggest levers on your own life is your ability to form useful habits.

When I talk to researchers, when I talk to people wanting to engage in entrepreneurship, I tell them that if you read research papers consistently, if you seriously study half a dozen papers a week and you do that for two years, after those two years you will have learned a lot. This is a fantastic investment in your own long term development.

But that sort of investment, if you spend a whole Saturday studying rather than watching TV, there’s no one there to pat you on the back or tell you you did a good job. Chances are what you learned studying all Saturday won’t make you that much better at your job the following Monday. There are very few, almost no short-term rewards for these things. But it’s a fantastic long-term investment. This is really how you become a great researcher, you have to read a lot.

Counting on willpower to do these things almost never works, because willpower peters out. Instead, I think the people who build habits — you know, studying every week, working hard every week — are the ones most likely to succeed.

For myself, one of the habits I have is working out every morning for seven minutes with an app. I find it much easier to do the same thing every morning because it’s one less decision that you have to make. It’s the same reason that my closet is full of blue shirts. I used to have shirts in two colors, actually: blue and magenta. I thought that’s just too many decisions. [Laughter] So now I only wear blue shirts.

You’ve urged policymakers to spend time thinking about a future where computing and robotics have eliminated some substantial portion of the jobs people have now. Do you have any ideas about possible solutions?

It’s a really tough question. Computers are good at routine repetitive tasks. Thus far, the main things that computers have been good at automating are tasks where you kind of do the same thing day after day.

Now this can be at multiple points on the spectrum. Humans work on an assembly line, making the same motion for months on end, and now robots are doing some of that work. A midrange challenge might be truck-driving. Truck drivers do very similar things day after day, so computers are trying to do that too. It’s harder than most people think, but automated driving might happen in the next decade or so, we don’t know. Then, even higher-end things, like some radiologists read the same types of x-rays over and over each day. Again, computers may have traction in those areas.

But for the social tasks which are non-routine and non-repetitive, those are the tasks that humans will be better at than computers for quite a period of time, I think. In many of our jobs we do different things every day. We meet different people, we have to arrange different things, solve problems differently. Those things are relatively difficult for computers to do, for now.

The challenge that faces us is that, when the U.S. transformed from an agricultural to a manufacturing and services economy, we had people move from one routine task, such as farming, to a different routine task, such as manufacturing or working in call centers. A large fraction of the population has made that transition, so they’ve been okay, they’ve found other jobs. But many of their jobs are still routine and repetitive.

The challenge that faces us is to find a way to scalably teach people to do non-routine non-repetitive work. Our education system, historically, has not been good at doing that at scale. The top universities are good at doing that for a relatively modest fraction of the population. But a lot of our population ends up doing work that is important but also routine and repetitive. That’s a challenge that faces our educational system.

I think it can be solved. That’s one of the reasons why I’ve been thinking about teaching innovation strategy, teaching creativity strategy. We need to enable a lot of people to do non-routine, non-repetitive tasks. These tactics for teaching innovation and creativity, these flight simulators for innovation, could be one way to get there. I don’t think we’ve figured out yet how to do it, but I’m optimistic it can be done.

You’ve said, “Engineers in China work much harder than the average Silicon Valley engineer. Engineers in Silicon Valley at startups work really hard. At mature companies, I don’t see the same intensity as you do in startups and at Baidu.” Why do you think that is?

I don’t know. I think the individual engineers in China are great. The individual engineers in Silicon Valley are great. The difference I think is the company. The teams of engineers at Baidu tend to be incredibly nimble.

There is much less appreciation for the status quo in the Chinese internet economy and I think there’s a much bigger sense that all assumptions can be challenged and everything is up for grabs. The Chinese internet ecosystem is very dynamic. Everyone sees huge opportunity, everyone sees massive competition. Stuff changes all the time. New inventions arise, and large companies will one day suddenly jump into a totally new business sector.

To give you an idea, here in the United States, if Facebook were to start a brand new web search engine, that might feel like a slightly strange thing to do. Why would Facebook build a search engine? It’s really difficult. But that sort of thing is much more thinkable in China, where there is more of an assumption that there will be new creative business models.

This seems to suggest a different management culture, one where you can make important decisions quickly and have them be intelligent and efficient and not chaotic. Is Baidu operating in a unique way that you feel is particularly helpful to its growth?

Gosh, that’s a good question. I’m trying to think what to point to. I think decision making is pushed very far down in the organization at Baidu. People have a lot of autonomy, and they are very strategic. One of the things I really appreciate about the company, especially the executives, is there’s a very clear-eyed view of the world and of the competition.

When executives meet, and the way we speak with the whole company, there is a refreshing absence of bravado. The statements that are made internally — they say, “We did a great job on that. We’re not so happy with those things. This is going well. This is not going well. These are the things we think we should emphasize. And let’s do a post-mortem on the mistakes we made.” There’s just a remarkable lack of bravado, and I think this gives the organization great context on the areas to innovate and focus on.

You’re very focused on speech recognition, among other problems. What are the challenges you’re facing that, when solved, will lead to a significant jump in the accuracy of speech recognition technology?

We’re building machine learning systems for speech recognition. Some of the machine learning technologies we’re using now have been around for decades. It was only in the last several years that they’ve really taken off.

Why is that? I often make an analogy to building a rocket ship. A rocket ship is a giant engine together with a ton of fuel. Both need to be really big. If you have a lot of fuel and a tiny engine, you won’t get off the ground. If you have a huge engine and a tiny amount of fuel, you can lift up, but you probably won’t make it to orbit. So you need a big engine and a lot of fuel.

The reason that machine learning is really taking off now is that we finally have the tools to build the big rocket engine — that is, giant computers. That’s our rocket engine. And the fuel is the data. We are finally getting the data that we need.

The digitization of society creates a lot of data and we’ve been creating data for a long time now. But it was just in the last several years we’ve been finally able to build big enough rocket engines to absorb the fuel. So part of our approach, not the whole thing, but a lot of our approach to speech recognition is finding ways to build bigger engines and get more rocket fuel.

For example, here is one thing we did; it’s a little technical. Where do you get a lot of data for speech recognition? We take audio data. Other groups use maybe a couple thousand hours of data; we use a hundred thousand hours. That is much more rocket fuel than you see in the academic literature.

Then one of the things we did was, if we have an audio clip of you saying something, we would take that audio clip of you and add background noise to it, like a clip recorded in a cafe. So we synthesize an audio clip of what you would sound like if you were speaking in a cafe. By synthesizing your voice against lots of backgrounds, we just multiply the amount of data that we have. We use tactics like that to create more data to feed to our machines, to feed to our rocket engines.
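
Ng’s noise-mixing trick is straightforward to sketch. Below is a minimal, illustrative Python example (not Baidu’s actual pipeline; the synthetic arrays, the function name and the 10 dB target are stand-ins) that overlays a background recording onto a clean utterance at a chosen signal-to-noise ratio:

    import numpy as np

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Overlay `noise` onto `speech` so the result has roughly `snr_db` dB SNR."""
        # Loop or trim the noise so it matches the speech length.
        if len(noise) < len(speech):
            noise = np.tile(noise, len(speech) // len(noise) + 1)
        noise = noise[: len(speech)]

        # Scale the noise to hit the requested signal-to-noise ratio.
        speech_power = np.mean(speech ** 2)
        noise_power = np.mean(noise ** 2) + 1e-12  # avoid divide-by-zero
        target_noise_power = speech_power / (10 ** (snr_db / 10))
        noise = noise * np.sqrt(target_noise_power / noise_power)

        return speech + noise

    # One clean utterance crossed with many backgrounds multiplies the data.
    rng = np.random.default_rng(0)
    utterance = rng.standard_normal(16000)   # stand-in for 1 s of 16 kHz speech
    cafe_noise = rng.standard_normal(8000)   # stand-in for a cafe recording
    augmented = mix_at_snr(utterance, cafe_noise, snr_db=10.0)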

One thing about speech recognition: most people don’t understand the difference between 95 and 99 percent accuracy. Ninety-five percent means you get one in 20 words wrong. That’s just annoying; it’s painful to go back and correct the errors on your cell phone.

Ninety-nine percent is game-changing. At 99 percent, it becomes reliable. It just works and you use it all the time. So this is not just a four percent incremental improvement; this is the difference between people rarely using it and people using it all the time.
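
The arithmetic behind that claim is easy to check. A rough back-of-the-envelope in Python, assuming errors are spread evenly across words, which real recognizers do not guarantee:

    for accuracy in (0.95, 0.99):
        words_per_error = 1 / (1 - accuracy)
        print(f"{accuracy:.0%} accuracy: about one wrong word every {words_per_error:.0f} words")

    # 95% accuracy: about one wrong word every 20 words
    # 99% accuracy: about one wrong word every 100 words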

So what is the hurdle to 99 percent at this point?

We need even bigger rocket engines and we still need even more rocket fuel. Both are still constrained and the two have to grow together. We’re still working on pushing that boundary.

What are the best-kept secrets of great programmers?

Source: Quora

Andy Crews, Principal Engineer with 15 years of professional software development experience

Learn when to tell your manager little white lies.
Well, they are not really lies, but rather this is about interpreting questions the right way.

Managers need to know how long it will take to implement a new feature or fix a bug. Often they are under pressure, but even when that’s not the case, they still need that information for prioritizing and scheduling. Getting this information from a developer is where some miscommunication can occur in my experience.

“How long will it take to implement this feature?” Often there is some explicit or implied time pressure in the question. “You know, I’m really in the hot seat on this one with our biggest customer.”

Once upon a time, I interpreted this question at face value: If I were to work on this feature starting now, what is the minimum time it would take me to have something that met the requirements? I answered that question, but I ignored other relevant considerations:

  • What kind of tests should I add?
  • Is there some existing code that can be reused?
  • Is there some refactoring that should be done to implement this feature in a way that it can be maintained and enhanced in the future?

Ignoring these questions and getting a feature working quickly makes lots of people happy in the short term, but all of these aspects affect the long-term quality and maintainability of the software. When they are ignored, it leads to technical debt in the code base.

Technical debt makes it ever more difficult to add new features and produce high-quality software. In my experience, if one doesn’t address testing, refactoring and code reuse during the implementation, they are never addressed. There is always another problem to solve around the corner. Each time we ignore them, we take one more step towards the inevitable “sea of complexity”. Each step makes the next enhancement or feature more difficult. When you finally reach the “sea of complexity” you realize you can no longer enhance or support the software using the existing code base and the only solution is to throw it all away and start over.

So when I hear the question “How long will it take to implement this feature?”, before I answer I translate it in my head into the question “How long will it take to implement this feature with high quality, including adding unit tests, refactoring existing code, and integrating it into our existing code base in such a way that it can be maintained and enhanced?”

Jens Rantil, Developer, life hacker and inspired Swede

  • Most of the time, using inheritance is bad object-oriented design in the long run. It reduces the reusability and testability of code. Consider using composition and interfaces instead (a minimal sketch follows this list). See “No, inheritance is not the way to achieve code reuse!”.
  • Avoid introducing an interface until you are comfortable in your domain. “Premature interfacing” can also lead to design issues down the road.
  • Deeply nested code (both intra-function and inter-function) is 1) harder to maintain, 2) more prone to bugs and 3) harder to reuse. Shallow code hierarchies generally make a better foundation for reuse and testing. See the note about inheritance above.
  • Estimating time is hard; that’s one reason why Scrum and sprints are used in many places.
  • Proper encryption is hard. Don’t invent it yourself unless you have a good reason to.
  • Side-effect free logic is nice. It makes it easier to reason about state (see below) and generally simplifies automated testing. A before-and-after sketch follows this list.
  • Learn to reason around state and lifecycles. See Jens Rantil’s Hideout.
  • Concurrency can be hard without the right primitives. Thread pools, queues, observables, immutability and actors can sometimes help a lot.
  • Premature optimization is the root of all evil. A good general development process is: 1) Get it to work. 2) Make the code beautiful. 3) Optimize.
  • Know your basic data structures and understand time complexity. It’s an effective way of making your code much faster without adding complexity. A small demonstration follows this list.
  • Practise back-of-the-envelope calculations. How many items will a piece of code generally hold in memory?
  • An application will eventually break: a bad deploy, unintended behaviour, unintended input or unexpected external load. Plan for that. This includes logging uncaught exceptions, testing that a deploy works after it’s out (and potentially rolling back), running tests continuously, and setting (sane!) limits on all in-memory queues and thread pools.
  • If you monitor the size of a queue, it’s generally always full or empty. Plan for that.
  • Networks and external services should always be expected to be flaky. Always set socket timeouts on your sockets and read/connect timeouts on HTTP calls. Consider wrapping external network calls in a retrying/circuit-breaker library (see Netflix/Hystrix & rholder/guava-retrying). A plain-Python version of this idea is sketched after this list.
  • Write code as you want to read it. Add comments where you think you will not understand your code in a year’s time. You will need the comment in a month.
  • Set up your build tooling around a project so that it’s easy to get started. Document the (few) commands needed to build, run, test and package in a README file.
  • Making sure your projects can build from the command line makes things so much easier down the road.
  • Handling third-party dependencies in many languages can be a real mess (looking at you, Java and Python), specifically when two different libraries depend on different versions. Some key things to take away from this: 1) Constantly question your dependencies. 2) Automated tests can help against this. 3) Always pin which version of a third-party dependency you use.
  • Popular Open Source projects are a great way to learn about good maintainable code and software development process.
  • Every single line you add to an application adds complexity and makes it more likely to have bugs. Removing code is a great way to remove bugs.
  • Every piece of infrastructure (databases, caches, message queues, etc.) your application depends on is a source of bugs and requires maintenance and new knowledge. Not to mention that such dependencies might slow down productivity. Weigh new infrastructure against productivity carefully. Can you replace an old piece of infrastructure with the new one?
  • Code paths that handle failures are rarely tested or executed (for a reason). This makes them a good candidate for bugs.
  • Input validation is not just useful for security reasons. It helps you catch bugs early.
  • Somewhat related to the above: state validation and output validation can be just as useful as input validation, both for discovering inherent bugs and for security-sensitive code.
  • Code reviews are a great way to improve as a programmer. You will get critique on your code, and you will learn to describe in words why someone else’s code is good or bad. It also trains you to discover common mistakes.
  • Learning a new programming language is a great way to learn about new paradigms and question old habits.
  • Always specify the encoding when converting text to and from bytes, be it when reading/writing to the network, to a file or for encryption purposes. If you rely on your locale’s character set you are bound to run into data corruption eventually. Use a UTF character set if you get to choose yourself. A short demonstration follows this list.
  • Know your tools; that includes your editor, the terminal, your version control system (such as git) and your build tooling.
  • Learn to use your tools without a mouse. Learn as many keyboard shortcuts as possible. It will make you more efficient and is generally more ergonomic.
  • Reusing code is not an end goal and will not make your code more maintainable per se. Reuse complicated code but be aware that reusing code between two different domains might make them depend on each other more than necessary.
  • Sitting at a computer for long periods can break your body. 1) Listen to what your body has to say. Think extra about your back, neck and wrists, and take breaks if your body starts to hurt. Creating a pause habit (making tea, grabbing coffee) can be surprisingly good for your body and mind. 2) Rest your eyes from time to time by looking away from your screen. 3) Get a good keyboard without awkward wrist movements.
  • Automated testing, and in particular unit tests, is not just about checking that your code does what it should. Tests also 1) document how the code is supposed to be used and 2) help you put yourself in the shoes of someone who will be using the code. The latter is why some claim a test-first approach to development can yield cleaner APIs.
  • Test what needs to be tested. Undertesting can slow you down because of bug hunting. Overtesting can slow you down because every change requires updating too many tests.
  • Test what (outcome) is being done by an implementation, not how it’s being done. In other words, your tests should not depend on the inner nitty-gritty details of a class. A different way of looking at it is that a rewrite of how a class does something shouldn’t require changing any of the tests as long as the outcome is the same. This makes refactoring a lot easier.
  • Dynamic languages generally need more testing to assert they work properly than compiled languages. (Offline code analysis tools can also help.)
  • Race conditions are surprisingly more common than one generally thinks, because a computer executes far more operations per second than we are used to reasoning about.
  • Understanding the relationship between throughput and latency (http://en.m.wikipedia.org/wiki/L…) can be very useful when optimizing your systems.
  • Many times high throughput can be achieved by introducing smart batching.
  • Commit your code in small, working, chunks and write a helpful commit message that summarizes what you did and why you did it. Working commits are a prerequisite for bisecting bugs (Git – git-bisect Documentation).
  • Keep your version control system’s branches short-lived. My experience is that risk of failures increases exponentially the longer a branch lives. Avoid working on a branch for more than two weeks. For large features, break them into multiple refactoring branches to make the feature easier to implement in a few commits.
  • Know your production environment and think about deployment strategies for your change as early as possible.
  • Surprisingly, shipping code more frequently tends to reduce risk, not increase it.
  • Learning an object-oriented language is easy. Mastering good object-oriented design is hard. Knowing about SOLID and common object-oriented design patterns will improve your understanding of OO design.
  • It’s possible to write crappy code in a well architected system. However, in a well architected system you know that the crap is isolated and easily replaceable. Focus on a sane decoupled architecture first. The rest can be cleaned up later if on a tight schedule.
  • Bus factor can be a serious risk to your team. Be a team player: Most of your code you write will be read or modified by someone else. This includes the code you write early in a project! Document (as appropriate) and write solid commit messages from the start. Also, code reviews and scripts can help a lot in knowledge sharing. Last, but not least, do make sure you aren’t the only one sitting on secret passwords etc.
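
A few of the bullets above lend themselves to short sketches. First, the composition-and-interfaces point: a minimal, hypothetical Python example (the class names are invented for illustration) in which ReportGenerator depends on a small Storage interface rather than inheriting from a concrete class, so a test double can be composed in without any subclassing.

    from typing import Protocol

    class Storage(Protocol):
        """The small interface ReportGenerator depends on."""
        def save(self, name: str, data: bytes) -> None: ...

    class DiskStorage:
        def save(self, name: str, data: bytes) -> None:
            with open(name, "wb") as f:
                f.write(data)

    class InMemoryStorage:
        """A test double composed in without subclassing DiskStorage."""
        def __init__(self) -> None:
            self.files = {}

        def save(self, name: str, data: bytes) -> None:
            self.files[name] = data

    class ReportGenerator:
        # Composition: the dependency is injected, not inherited.
        def __init__(self, storage: Storage) -> None:
            self._storage = storage

        def run(self) -> None:
            self._storage.save("report.txt", b"quarterly numbers")

    # Production and test wiring differ only in the composed dependency.
    fake = InMemoryStorage()
    ReportGenerator(fake).run()
    assert fake.files["report.txt"] == b"quarterly numbers"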
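Next, the side-effect-free-logic bullet as a before-and-after sketch (again hypothetical names): hoisting the clock read out of the function turns flaky, time-dependent logic into a pure function that is trivial to test.

    from datetime import datetime, timezone

    # With a side effect: reads the clock internally, so tests are flaky.
    def is_expired_impure(deadline: datetime) -> bool:
        return datetime.now(timezone.utc) > deadline

    # Side-effect free: the current time is an argument, so the logic is
    # deterministic and easy to reason about.
    def is_expired(deadline: datetime, now: datetime) -> bool:
        return now > deadline

    fixed_now = datetime(2015, 6, 1, tzinfo=timezone.utc)
    assert is_expired(datetime(2015, 5, 31, tzinfo=timezone.utc), fixed_now)
    assert not is_expired(datetime(2015, 6, 2, tzinfo=timezone.utc), fixed_now)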
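The data-structures bullet, demonstrated: membership tests against a list scan every element, while a set uses hashing. The sizes are arbitrary and the timings will vary by machine; the point is the asymptotic gap.

    import time

    n = 100_000
    haystack_list = list(range(n))
    haystack_set = set(haystack_list)
    needles = list(range(n - 200, n))  # lookups near the end: worst case for the list

    start = time.perf_counter()
    hits_list = sum(1 for x in needles if x in haystack_list)  # O(n) scan per lookup
    list_secs = time.perf_counter() - start

    start = time.perf_counter()
    hits_set = sum(1 for x in needles if x in haystack_set)    # O(1) average per lookup
    set_secs = time.perf_counter() - start

    assert hits_list == hits_set == 200
    print(f"list: {list_secs:.4f}s  set: {set_secs:.6f}s")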
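For the flaky-network bullet: Hystrix and guava-retrying are JVM libraries, so here is a plain standard-library Python stand-in (the function name and parameters are assumptions, not a real library API) showing the two essentials — a bounded timeout on every call and bounded retries with backoff.

    import time
    import urllib.error
    import urllib.request

    def fetch_with_retries(url: str, attempts: int = 3, timeout_secs: float = 2.0) -> bytes:
        """Fetch `url`, treating the network as flaky: bounded timeout, bounded retries."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout_secs) as response:
                    return response.read()
            except (urllib.error.URLError, TimeoutError):
                if attempt == attempts:
                    raise                       # out of retries: surface the failure
                time.sleep(0.1 * 2 ** attempt)  # exponential backoff before retrying

    # data = fetch_with_retries("https://example.com/health")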
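Finally, the encoding bullet in code: be explicit about the character set at every byte boundary, because decoding with a mismatched charset often corrupts text silently rather than raising an error.

    text = "naïve café"

    payload = text.encode("utf-8")       # explicit on the way out...
    roundtrip = payload.decode("utf-8")  # ...and explicit on the way back in
    assert roundtrip == text

    # Decoding the same bytes with a different locale's charset silently
    # mangles the text instead of failing loudly:
    print(payload.decode("latin-1"))     # prints 'naÃ¯ve cafÃ©'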

Jeff Darcy

1. Never reveal all that you know.

OK, seriously this time.  I think there are really a few things that distinguish great programmers.

  1. Know the concepts.  Solving a problem via memory or pattern recognition is much faster than solving it by reason alone.  If you’ve solved a similar problem before, you’ll be able to recall that solution intuitively.  Failing that, if you at least keep up with current research and projects related to your own, you’ll have a much better idea where to turn for inspiration.  Solving a problem “automatically” might seem like magic to others, but it’s really an application of “practice practice practice” as Miguel Paraz suggests.
  2. Know the tools.  This is not an end in itself, but a way to maintain “flow” while programming.  Every time you have to think about how to make your editor or version-control system or debugger do what you want, it bumps you out of your higher-level thought process.  These “micro-interruptions” are small, but they add up quickly.  People who learn their tools, practice using their tools, and automate things that the tools can’t do by themselves can easily be several times as productive as those who do none of those things.
  3. Manage time.  Again it comes back to flow.  If you want to write code, write code.  If you want to review a bunch of patches, review a bunch of patches.  If you want to brainstorm on new algorithms . . . you get the idea.  Don’t try to do all three together, and certainly don’t interrupt yourself with email or IRC or Twitter or Quora.  😉  Get your mind set to do one thing, then do that thing for a good block of time before you switch to doing something else.
  4. Prioritize.  This is the area where I constantly see people fail.  Every problem worth tackling has many facets.  Often, solving one part of the problem will make solving the others easier.  Therefore, getting the order right really matters.  I’m afraid there’s no simple answer for how to recognize that order, but as you gain more experience within a problem domain – practice again – you’ll develop a set of heuristics that will guide you.
  5. Reuse everything.  Reuse ideas.  Reuse code.  Every time you turn a new problem into a problem you already know how to solve – and computing is full of such opportunities – you can save time.  Don’t worry if the transformed solution isn’t absolutely perfect for the current problem.  You can refine later if you really need to, and most often you’ll find that you’re better off moving on to the next problem.

A lot of these really come down to efficiency.  As you move through more problems per day, you’ll gain more experience per day, which will let you move through more problems per day, and so on.  It’s a feedback loop; once you get on its good side, your effectiveness (and value) will increase drastically.

<a  href=”http://www.amazon.com/gp/product/1617292397/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1617292397&linkCode=as2&tag=softw05-20&linkId=BF7CMFQ5Y3OGUVJ7″><img border=”0″ src=”http://ws-na.amazon-adsystem.com/widgets/q?_encoding=UTF8&ASIN=1617292397&Format=_SL110_&ID=AsinImage&MarketPlace=US&ServiceVersion=20070822&WS=1&tag=softw05-20″ ></a><img src=”http://ir-na.amazon-adsystem.com/e/ir?t=softw05-20&l=as2&o=1&a=1617292397″ width=”1″ height=”1″ border=”0″ alt=”” style=”border:none !important; margin:0px !important;” />