Knowing both math and programming: a complete guide to becoming an investment-bank quant

2016-04-12 UniCareer

What does a quant do?

A quant's job is to design and implement mathematical models of finance, mostly through computer programming: pricing derivatives, estimating risk, predicting market behavior, and so on. A quant is therefore better viewed as an engineer (in the habitual Chinese classification, a science-and-engineering talent rather than a liberal-arts one), which makes the role somewhat different from mainstream finance, although finance itself has plenty of quantitative content.

What kinds of quants are there?

(1) Desk Quant

A desk quant develops pricing models used directly by traders. The advantage is being close to the money and the opportunities of trading; the disadvantage is heavy pressure.

(2) Model Validating Quant

A model validating quant independently develops pricing models, but in order to verify the correctness of the models built by desk quants. The advantage is an easier life with less pressure; the disadvantage is that such teams tend to have little influence and sit far from the money.

(3) Research Quant

A research quant tries to invent new pricing formulas and models, and sometimes performs blue-sky research (the original author admits to being unsure what exactly that involves). The upside is that the work is interesting (for those who like it) and you learn a great deal. The downside is that it can be hard to prove your worth: like a scientist, you go unnoticed without major results.

(4) Quant Developer

Really a programmer with a glorified title, but well paid and easy to find work as. The work varies a lot: it may be writing code all day, or debugging other people's large systems.

(5) Statistical Arbitrage Quant

A statistical arbitrage quant looks for patterns in data to drive automated trading systems (arbitrage systems). The techniques are quite different from those of derivatives pricing and are mainly used in hedge funds, and the rewards in such positions are extremely volatile.

(6) Capital Quant

A capital quant builds a bank's credit and capital models. It is less glamorous than derivatives pricing, but with the arrival of the Basel II accord it has become more and more important. You get decent (though not spectacular) pay, less pressure, and shorter hours.

People go into finance to make money. If you want to earn more, get closer to where the money is "produced". This breeds a tendency for those close to the money to look down on those farther from it; as a general rule, life is easier near the money than away from it.

Which areas do quants work in?

(1) FX

FX is short for foreign exchange. Contracts tend to be short dated, in large amounts, with simple terms, so the emphasis is on building models very quickly.

(2) Equities

Equities means options on stocks and indices. The techniques lean toward partial differential equations (PDEs). It is not an especially large market.

(3) Fixed Income

Fixed income means interest-rate based derivatives. By notional value this may be the largest market. The mathematics involved is more complex because the problems are fundamentally multi-dimensional, technical tricks abound, and the pay is relatively high.

(4) Credit Derivatives

Credit derivatives are products built on whether companies repay their debt. The area has grown very fast, with strong demand and hence high pay; even so, it reflects some of the bubble elements of the economy of the time.

(5) Commodities

Commodities has also become a fast-growing area, thanks to the broad rise in commodity prices in recent years.

(6) Hybrids

Hybrids are derivatives that span more than one market, typically interest rates plus something else. The main advantage is that you get to learn about several areas; it is currently a very popular field.

Where do quants usually work?

(1) Commercial banks (HSBC, RBS)

Commercial banks demand less of you and pay less, and the work is relatively stable.

(2) Investment banks (Goldman Sachs, Lehman Brothers)

Investment banks demand long hours but pay well. The jobs are not very stable. In general, American banks pay more than European ones, but expect longer hours.

(3) Hedge funds (Citadel Group)

Hedge funds demand long hours and heavy workloads. They grow fast but are unstable: you may earn huge rewards, or be fired within a few months.

(4) Accounting firms

The large accounting firms have their own consulting quant teams; some even send employees to Oxford for a master's degree. The main drawback is that you are far from the real action and decisions, and the strongest people prefer banks, so it is harder to find someone to learn from.

(5) Software companies

Outsourcing quant models is increasingly popular, so software companies are also an option. The drawbacks are similar to those of accounting firms.

Which books should an aspiring quant read?
UniCareer's recommended reading list

《Options, Futures and Other Derivatives》
John C. Hull

Useful whether you are job hunting or already a senior quant. John Hull himself is formidable, with pioneering results in many areas; he is now at the University of Toronto. A classic among classics, reasonably broad in coverage, though not very mathematical. Known as the bible of Wall Street, so naturally not too hard.

《Stochastic Calculus for Finance II》
Steven E. Shreve

Shreve's newer book: very elegant, very careful, very mathematically complete, well suited to readers with a mathematics background, but on the thick side. The author is a professor at Carnegie Mellon and a top figure in the field. Volume I covers discrete models and Volume II continuous models, so Volume I is the gentler starting point.

《Liar's Poker》
Michael Lewis

About the old arbitrage team at Salomon Brothers, then the best quant traders in the world. Everyone in trading reads this book.

《C++ Design Patterns and Derivatives Pricing》
Mark S. Joshi

Important for anyone who knows basic C++; more importantly, it teaches you Monte Carlo.

《Modeling Derivatives in C++ (Wiley Finance)》
Justin London
After a year of studying financial engineering, this turned out to be the most essential and practical book; the theory books are worth only a skim. Many theory books overlap heavily, so choose carefully.

《The Concepts and Practice of Mathematical Finance》
Mark S. Joshi
This book aims to cover the areas of knowledge a good quant should command, including some programming projects you are strongly advised to complete before applying for jobs.

《Interest Rate Models – Theory and Practice》
Damiano Brigo / Fabio Mercurio
An extremely well-regarded book. Its greatest treasure is the treatment of the Libor market model. The authors lay out every detail, including extensive numerical results, which makes it easy to self-study and verify.

《Probability with Martingales》
David Williams
Organized around martingales. The first part introduces just as much measure theory as the later probability theory needs, no more. You can follow it even without prior measure theory; no wonder it is used with undergraduates. Williams is among the best writers working in this area.

《Monte Carlo Methods in Financial Engineering》
Paul Glasserman
Very practical and true to its title: Monte Carlo methods as applied in financial engineering. Readers who have actually worked through it tend to agree it is not ideal as a first book; it rewards you more, and reads more comfortably, once you already know something of both finance and Monte Carlo.

《My Life as a Quant: Reflections on Physics and Finance》
Emanuel Derman
The author is a first-generation quant, formerly head of the quantitative research group at Goldman Sachs, now at Columbia. He is a leading figure in stochastic volatility, and in several other fields besides.

What does one need to know to become a quant?

Depending on where you want to work, the required knowledge varies a great deal. At the time this article was written (1996), my advice was simply to master all of my books. Many people mistake learning this material for merely reading it. What you should do is genuinely study it, as if preparing for an exam based on these books; if you are not confident you would get an A on that exam, do not interview for any job.

Interviewers care more about how thoroughly you understand the basics than about how much you know. Showing interest in the field also matters, so read The Economist, the FT and The Wall Street Journal regularly. Interviews include basic calculus or analysis questions, for example: what is the integral of log x? Questions such as how the Black-Scholes formula is derived are also perfectly normal, and they will ask about your thesis.
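For instance, the log-integral question has a one-line answer via integration by parts (a worked example added here for reference; it is not part of the original article):

$$\int \ln x \, dx = x \ln x - x + C.$$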

An interview is also your chance to assess the company: you can infer from their questions what kind of people they like and what they care about. If they ask a lot about C++ syntax, be careful, unless that is the work you want. Generally speaking, a PhD is necessary for getting a quant offer.

A master's degree in financial mathematics will get you into bank risk or trading support rather than directly into quant work. Banking keeps getting more mathematical, so the material helps in many areas of a bank.

In the US it is increasingly common to follow a PhD with a master's degree; in the UK this remains rare.

Which majors map onto quant roles?

From observation, quants typically majored in mathematics, physics, or financial engineering (financial mathematics). Although not especially numerous, some investment banks do hire master's-level financial-engineering graduates as quants, and the good FE programs regularly place master's students in quant roles.

Programming

Quants of every type spend a large share of their time programming (more than half). That said, developing new models is itself interesting work. The standard implementation language is C++, so an aspiring quant needs to learn C++. Some places use Matlab, which is a useful skill, though less important than C++. VBA is also widely used, but you can pick it up on the job.

Compensation

How much does a quant earn? An inexperienced quant makes roughly USD 60k-100k a year before tax. Bonuses are not huge at first, but in a good year they are considerable; typically a first-year hire can expect a bonus of ten or twenty thousand dollars. If your salary falls outside this range, ask yourself why. Pay rises quickly, and bonuses become a large component of total compensation. Do not fixate on the starting salary; weigh the job's growth and learning opportunities instead.

Hours

A quant's hours vary a lot. At RBS we worked 8:30 to 6pm. Pressure varies too; some American banks expect longer hours. In London five to six weeks of holiday is normal, versus two to three in the US.

Helping children thoroughly understand basic math concepts: a collection of Khan Academy math videos

These days, is there anyone with even a passing connection to education who has not heard of Khan Academy? A while ago its founder's story was turned into inspirational posts shared so widely that even my mother knows it, because Bill Gates sang his praises.

You probably know the story already, so here is a quick recap before the main topic:

Salman Khan is an American of Bangladeshi descent, educated at MIT and Harvard.

Khan has a young cousin named Nadia. In 2004 she was a seventh grader in New Orleans struggling with math, and she asked Khan to tutor her. They lived in different cities, so Khan taught her over the internet. His explanations were lively and his concepts clear, and Nadia's math improved rapidly.

Word soon spread, and his friends asked him to tutor their children too. Every child Khan tutored saw their math scores climb.

Khan decided one-on-one tutoring was too inefficient: better to record videos, post them online, and let everyone watch for free. So he went home, shut himself in a closet, and started recording with a webcam.

His videos are extremely engaging, usually explaining one math concept within ten minutes, and they drew enormous attention online. Things snowballed: he spent a year recording in that closet, covering everything from elementary school math through high school calculus to university-level mathematics, some 4,800 videos in all.

The videos were a huge success online, with close to 500 million views and 48 million viewers. In the US, teachers at more than 20,000 schools have students watch Khan's videos in math class, with the teacher answering questions afterward. As teachers put it: "If you teach math in America, you cannot not have heard of Salman Khan."

In 2011 Khan gave a TED talk, with Bill Gates on stage for a Q&A. The topic: what should the education model of the future look like? It is worth watching if you are interested.

Gates is a Khan fan. He had spent many hours teaching his three children the basic concepts of math and science, yet they always came away half-understanding. In early 2010 someone recommended Khan's site, and to his surprise, the points he could never get across clicked for his children after a 12-minute video.

Linzi (檩子, the editor) has known about Khan and his videos for a long time. Having watched a few: Khan really is good, able to make abstract math concepts clear with nothing but a blackboard and a pen.

I wanted to recommend Khan back then, but good as the videos are, they are entirely in English; never mind children, even an adult with CET-6 English would struggle. So I let it go.

This time, with readers asking for resources, I searched again and found that many of Khan's videos on NetEase Open Courses (网易公开课) have been partially localized with Chinese-English subtitles. There is no Chinese dubbing yet, but following along is no longer a problem.

Most of Khan's videos can currently be found on NetEase Open Courses with bilingual subtitles. They span many subjects, but the math videos remain the core, around 40 topics in all.
Having watched some, I find the most practical way to use them is this: the parent watches first to learn how Khan explains a concept, then explains it to the child in person, following Khan's approach. The value of the videos is that Khan turns an abstract concept into language and examples a child can understand; his style of explanation is what is most worth borrowing.

There are many videos, but the classics are the explanations of basic math concepts. Linzi did a little homework and compiled the video catalogue of Khan Academy's arithmetic and pre-algebra course below, for reference. (Click "Read the original" for links to all the videos, or search NetEase Open Courses.)

Khan Academy's arithmetic and pre-algebra course is the starting point for learning math from zero and the lead-in to the algebra course. It suits kindergartners and primary schoolers who want to start from the very beginning or go on to study algebra.

1. Addition and subtraction

Topics: adding and subtracting positive and negative numbers, starting from 1+1=2

[Episode 1] Addition 1
[Episode 2] Addition 2
[Episode 3] Subtraction 1
[Episode 4] Subtraction 2
[Episode 5] Subtraction 3
[Episode 6] Addition 3
[Episode 7] Addition 4
[Episode 8] Subtraction 3
[Episode 9] Why borrowing works
[Episode 10] Addition 5
[Episode 11] Subtraction 4
[Episode 12] Subtraction word problems
[Episode 13] Alternate mental subtraction
[Episode 14] Introduction to negative numbers
[Episode 15] Adding negative numbers
[Episode 16] Adding numbers with different signs
[Episode 17] Adding and subtracting negative numbers
[Episode 18] Adding two-digit numbers
[Episode 19] Addition and subtraction word problems

2. Multiplication and division

Topics: multiplication and division; multiplying and dividing positive and negative numbers; place value

[Episode 1] Basic multiplication
[Episode 2] Multiplication tables
[Episode 3] Multiplication tables for 10, 11 and 12
[Episode 4] Division 1
[Episode 5] Division 2
[Episode 6] Splitting a total into equal parts, with applications
[Episode 7] Multiplying two digits by one digit
[Episode 8] Multiplying two digits by two digits
[Episode 9] Multiplying numbers, with applications 2
[Episode 10] Lattice multiplication

3. Properties of numbers

Topics: the commutative, associative and distributive laws; identities

[Episode 1] Commutative law of addition
[Episode 2] Commutative law of multiplication
[Episode 3] Associative law of addition
[Episode 4] Associative law of multiplication
[Episode 5] Associative property of addition
[Episode 6] Distributive property
[Episode 7] Distributive property 2
[Episode 8] Distributive property, example 1
[Episode 9] Number properties and absolute value
[Episode 10] Identity property 1
[Episode 11] Identity property, example 2
[Episode 12] Identity property of 0
[Episode 13] Inverse property of addition
[Episode 14] Inverse property of multiplication
[Episode 15] Why division by 0 is undefined
[Episode 16] Why 0 divided by 0 is undefined
[Episode 17] Undefined versus indeterminate

4. Fractions

Topics: computing with and converting fractions

[Episode 1] Numerator and denominator of a fraction
[Episode 2] What a fraction means
[Episode 3] Equivalent fractions
[Episode 4] Equivalent fractions, example
[Episode 5] Comparing fractions
[Episode 6] Fractions in lowest terms
[Episode 7] Comparing fractions, part 2
[Episode 8] Ordering fractions

5. Decimals

Topics: understanding the concept; computing with and converting decimals

[Episode 1] Decimal addition
[Episode 2] Decimal place value
[Episode 3] Decimal place value 2
[Episode 4] Decimals on the number line
[Episode 5] Rounding decimals
[Episode 6] Estimating with decimals
[Episode 7] Comparing decimals
[Episode 8] Decimal addition
[Episode 9] Decimal subtraction
[Episode 10] Decimal subtraction word problems
[Episode 11] Multiplying decimals by powers of 10
[Episode 12] Decimal multiplication
[Episode 13] Decimal multiplication
[Episode 14] Dividing decimals by powers of 10
[Episode 15] Decimal division 1
[Episode 16] Decimal division 2
[Episode 17] Decimal division 3
[Episode 18] Decimal multiplication 3
[Episode 19] Converting between decimals and fractions
[Episode 20] Converting fractions to decimals, example
[Episode 21] Converting fractions to decimals
[Episode 22] Converting a fraction to a decimal, example 1
[Episode 23] Converting a fraction to a decimal, example 2
[Episode 24] Converting repeating decimals to fractions 1
[Episode 25] Converting repeating decimals to fractions 2
[Episode 26] Converting decimals to fractions 1, example 1
[Episode 27] Converting decimals to fractions 1, example 2
[Episode 28] Converting decimals to fractions 1, example 3
[Episode 29] Converting decimals to fractions 2, example 1
[Episode 30] Converting decimals to fractions 2, example 2
[Episode 31] Percents and decimals
[Episode 32] Expressing a number as a decimal, a percent and a fraction
[Episode 33] Expressing a number as a decimal, a percent and a fraction 2
[Episode 34] A point on the number line
[Episode 35] Ordering numbers

6. Negative numbers

Topics: understanding and using negative numbers

[Episode 1] Introduction to negative numbers
[Episode 2] Ordering negative numbers
[Episode 3] Adding negative numbers
[Episode 4] Adding numbers with different signs
[Episode 5] Adding and subtracting negative numbers
[Episode 6] Multiplying positive and negative numbers
[Episode 7] Dividing positive and negative numbers
[Episode 8] Why a negative times a negative is positive

7. Factors and multiples

Topics: prime numbers, least common multiple, greatest common divisor

[Episode 1] Prime numbers
[Episode 2] Recognizing prime numbers
[Episode 3] Divisibility tests
[Episode 4] Common divisibility examples
[Episode 5] Finding the factors of a number
[Episode 6] Prime factorization
[Episode 7] Least common multiple
[Episode 8] Least common multiple (LCM)
[Episode 9] Greatest common divisor
[Episode 10] Least common multiple of algebraic expressions
[Episode 11] Divisibility by 3

8. Exponents

Topics: understanding exponents without algebra

[Episode 1] Understanding exponents
[Episode 2] Understanding exponents 2
[Episode 3] Exponents, level 1
[Episode 4] Exponents, level 2
[Episode 5] Exponents, level 3
[Episode 6] Exponent rules 1
[Episode 7] Exponent rules 2

9. Ratios and proportions

Topics: understanding ratios without algebra

[Episode 1] Introduction to ratios
[Episode 2] Understanding ratios
[Episode 3] Ratios as fractions in simplest form
[Episode 4] Simplifying ratios
[Episode 5] Finding an unknown in a proportion
[Episode 6] Finding unit rates

10. Order of operations

Topics: the order in which the operations in an expression are carried out

[Episode 1] Introduction to order of operations
[Episode 2] A more complicated order-of-operations example
[Episode 3] Order of operations 1
[Episode 4] Order of operations 2


The avalanche effect of pure mathematics: how did the Poincaré conjecture come to benefit precision medicine?

2016-04-12 Xianfeng Gu, for 赛先生 (Mr. Science)

Figure 1. A computer-rendered 3D model illustrating the Poincaré conjecture.

Xianfeng Gu (tenured professor at SUNY Stony Brook, visiting professor at the Yau Mathematical Sciences Center of Tsinghua University, founder of computational conformal geometry)

Recently Matt Ridley, a member of the British House of Lords, published "The Myth of Basic Science" in The Wall Street Journal. He argues that the claim "science drives innovation, and innovation drives commerce" is largely backwards: commerce drives innovation and innovation drives science, just as scientists are driven by practical needs rather than the reverse. In short, he holds that "scientific breakthroughs are the result of technological progress, not its cause."

Ridley's piece reflects a serious and widespread misunderstanding of basic science, one that can confuse young students' thinking and values, and it deserves to be corrected. To be sure, commercial needs and engineering practice supply raw material for basic research. Historically, optimal mass transportation theory and the Monge-Ampère equation originated in the transport of earthworks, and Kantorovich, who resolved the central problem, received the Nobel Prize in economics for the theory. A few years ago, motivated by the compression of medical images, Terence Tao (with collaborators) proposed the theory of compressive sensing. Fundamentally, however, the driving force of basic science is scientists' curiosity about natural truth and their pursuit of aesthetic value. Breakthroughs in basic science, because they reveal objective truths about nature, tend to trigger revolutions in applied science. Because pure mathematics is obscure and abstract and its practical value is not directly visible, the public has long been inclined to call it "useless". In reality, its guiding role in applied science is irreplaceable.

One facet of the development of computer science and technology is the conversion of millennia of accumulated human knowledge into algorithms, so that people without professional training can directly use the most recondite mathematical theories. In topology and geometry, many theorems that are centuries old have been turned into algorithms only recently. With the rapid advance of computing, however, the path from theorem to algorithm keeps accelerating: many newly developed mathematical theories are quickly converted into powerful algorithms and applied widely in engineering and medicine.

History shows again and again that the essential breakthroughs of curiosity-driven basic research often go unnoticed by the society of their time, like a faint cry in a glacial valley that vanishes on the wind; yet that one cry can set off an avalanche that darkens the sky and shakes the earth. The proof of the Poincaré conjecture is a vivid example. The public has not yet felt the avalanche, but it has already, irreversibly, begun!

1. The Poincaré conjecture

The French mathematician Jules Henri Poincaré was the founder of modern topology. Topology studies the properties of geometric objects, such as manifolds, that are invariant under continuous deformation. Imagine a surface made of a rubber membrane that we may stretch, compress, twist and bend, but never tear or glue: such deformations are continuous, also called topological, deformations, and the quantities preserved under them are topological invariants. If one rubber surface can be carried onto another by a topological deformation, the two surfaces share the same invariants and are topologically equivalent. As in Figure 2, if the bunny surface is made of a rubber membrane, we can inflate it like a balloon into the standard unit sphere, so the bunny surface and the unit sphere are topologically equivalent.

Figure 2. The bunny surface deforms continuously into the unit sphere, so the two are topologically equivalent.

The bunny surface cannot be continuously deformed into a torus, nor into the surface of Figure 3. Intuitively, the cat surface in Figure 5 has one "hole", or "handle"; the surface in Figure 3 has two. In topology the number of handles is called the genus, the most important topological invariant of a surface: all orientable closed surfaces are completely classified by genus.

Figure 3. A closed surface of genus 2. Genus is the most important topological invariant of a surface.

庞加莱思考了如下深刻的问题:封闭曲面上的“洞”是曲面自身的内蕴性质,还是曲面及其嵌入的背景空间之间的相对关系?这个问题本身就是费解深奥的,我们力图给出直观浅近的解释。我们人类能够看到环柄形成的“洞”,是因为曲面是嵌入在三维欧式空间中的,因此这些“洞”反应了曲面在背景空间的嵌入方式,我们有理由猜测亏格反映了曲面和背景空间之间的关系。

图4. 曲面上生活的蚂蚁如何检测曲面的拓扑?

但是另一方面,假设有一只蚂蚁自幼生活在一张曲面上,从未跳离过曲面,因此从未看到过曲面的整体情形。蚂蚁只有二维概念,没有三维概念。假设这只蚂蚁具有高度发达的智力,那么这只蚂蚁能否判断它所生活的曲面是个亏格为0的拓扑球面,还是高亏格曲面?

图5. 亏格为1的曲面上,无法缩成点的闭圈。

庞加莱最终悟到一个简单而又深刻的方法来判断曲面是否是亏格为0的拓扑球面:如果曲面上所有的封闭曲线都能在曲面上逐渐缩成一个点,那么曲面必为拓扑球面。比如,我们考虑图5中小猫的曲面,围绕脖子的一条封闭曲线,在曲面上无论怎样变形,都无法缩成一个点。换言之,只要曲面亏格非零,就存在不可收缩成点的闭圈。如果流形内所有的封闭圈都能缩成点,则流形被称为是单连通的。庞加莱将这一结果向高维推广,提出了著名的庞加莱猜想:假设M是一个封闭的单连通三维流形,则M和三维球面拓扑等价。
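In modern notation (an equivalent restatement of the conjecture, added here for reference): if $M$ is a closed 3-manifold whose fundamental group is trivial, then

$$\pi_1(M) = 1 \;\Longrightarrow\; M \cong S^3,$$

where $\cong$ denotes homeomorphism.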

Figure 6. A three-manifold with boundary, represented by a triangulation.

In the physical world one cannot see a closed three-dimensional manifold: just as a closed surface cannot be realized inside the two-dimensional plane, a closed three-manifold cannot be realized inside three-dimensional Euclidean space. Figure 6 shows three-manifolds with boundary; a solid bunny, for example, is topologically equivalent to a solid ball. Such three-manifolds are represented by triangulations, that is, by gluing together many tetrahedra. As the figure shows, a triangulation of the solid induces a triangulation of its two-dimensional boundary surface. A solid ball is in fact a three-dimensional topological disk; gluing two three-dimensional disks along their boundaries yields the three-dimensional sphere, just as gluing two two-dimensional disks along their boundaries yields the ordinary sphere. Admittedly, this already lies beyond everyday experience.

2. The uniformization theorem for surfaces

For the past century the Poincaré conjecture has been the most fundamental problem of topology, and countless topologists and geometers gave everything to prove it. Compared with the lucky few who finally succeeded, the many who labored in obscurity and failed deserve even deeper respect. I once visited the School of Mathematics at Jilin University and heard the story of Professor He Bohe, who devoted his whole life to the conjecture: single-minded, tireless, rising again after every setback, still thinking about the Poincaré conjecture at the end of his life. Professor He was certainly not striving for practical value or commercial gain, but out of curiosity about nature's mysteries and an ardent pursuit of aesthetic value. That purity and nobility is the true engine of human progress!

Figure 7. Geodesics connecting two points on a human facial surface.

Although the Poincaré conjecture is the most basic problem of topology, the essential breakthrough came from geometry. Given a topological manifold, such as the combinatorial structure of the tetrahedral mesh in Figure 6, we may assign a length to every edge so that each tetrahedron becomes a Euclidean tetrahedron; this defines a Riemannian metric. A Riemannian metric is a structure on the manifold that lets us determine the shortest geodesic between any two points; Figure 7 shows two geodesics on a facial surface. A Riemannian metric naturally induces the curvature of the manifold, a precise measure of how space bends. Given three points on a surface, connect them by geodesics into a geodesic triangle. On the Euclidean plane the interior angles of such a triangle sum to 180 degrees; on the sphere the sum exceeds 180 degrees; on a saddle surface it falls short of 180 degrees. The difference between the angle sum and 180 degrees is the total curvature of the triangle. This raises the question: given a topological manifold, can we choose the simplest possible Riemannian metric, one whose curvature is constant?
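In symbols, the angle-sum relation just described is the local Gauss-Bonnet formula (stated here for reference): for a geodesic triangle $T$ with interior angles $\alpha$, $\beta$, $\gamma$ on a surface with Gauss curvature $K$,

$$\alpha + \beta + \gamma - \pi = \iint_T K \, dA.$$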

Figure 8. The uniformization theorem: every closed surface can be conformally deformed into a surface of constant curvature.

The answer is yes: this is the uniformization theorem, the most fundamental theorem of surface differential geometry. It says that however wildly shapes vary in the world, they never escape one rule: every surface can be conformally transformed onto one of three standard surfaces, the unit sphere, the Euclidean plane, or the hyperbolic plane. These standard spaces have constant curvature +1, 0 and -1, as in Figure 8. A conformal transformation preserves local shape, behaving locally like a similarity transformation; since similarities preserve angles, conformal maps are also called angle-preserving maps. Figure 9 shows a conformal map from a surface to the plane. The uniformization theorem thus asserts that every closed surface carries one of three geometries: spherical, Euclidean, or hyperbolic. Almost every important theorem of surface differential geometry passes through the uniformization theorem.

Figure 9. A conformal transformation preserves local shape.

3. Thurston's geometrization conjecture

To attack the Poincaré conjecture, the Fields medalist William Thurston generalized the uniformization theorem to three-manifolds. Every three-manifold can be decomposed, by a standard procedure, into a sequence of simplest pieces, the so-called prime manifolds; a prime manifold cannot be decomposed further, and the decomposition is essentially unique. Thurston then proposed his breathtaking geometrization conjecture: every prime three-manifold carries a canonical Riemannian metric realizing one of eight geometries. In particular, a simply connected three-manifold carries a metric of constant positive curvature, and a three-manifold of constant positive curvature must be the three-sphere. The Poincaré conjecture is therefore a special case of the geometrization conjecture.

Figure 10. Thurston's apple: the geometrization conjecture.

Figure 10 shows an instance of geometrization. Take an apple in which three worms have eaten three tunnels, as in the left frame; this yields a three-manifold with boundary. By the geometrization program, the interior of the eaten apple admits a hyperbolic Riemannian metric under which the boundary surface has curvature -1 everywhere. Embedding the hyperbolic apple periodically in three-dimensional hyperbolic space gives the picture in the right frame.

4. Hamilton's Ricci flow

The essential breakthrough came from Hamilton's Ricci flow. Hamilton's idea derives from the classical phenomenon of heat diffusion. Imagine a bunny made of sheet iron whose surface temperature is initially uneven; as time passes the temperature evens out, until at thermal equilibrium it is constant. Hamilton's vision: let the Riemannian metric evolve in time, with the rate of change of the metric proportional to curvature. Curvature then diffuses like temperature, becoming ever more uniform until it is constant. As in Figure 11, an initial dumbbell surface evolves under the curvature flow, its curvature growing ever more uniform and finally constant: the surface has become a sphere.
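The evolution law just described, a metric changing at a rate proportional to curvature, is Hamilton's Ricci flow equation (restated here for reference):

$$\frac{\partial g_{ij}}{\partial t} = -2 R_{ij},$$

where $g_{ij}$ is the Riemannian metric and $R_{ij}$ its Ricci curvature.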

Figure 11. The curvature flow makes curvature ever more uniform until it is constant and the surface becomes a sphere.

For two-dimensional surfaces, Hamilton and Ben Chow proved that the curvature flow indeed deforms any Riemannian metric into a metric of constant curvature, giving a constructive proof of the uniformization theorem. In three dimensions, however, the Ricci flow met a formidable obstacle. On a surface, the curvature at every point stays finite at every moment of the flow; on a three-manifold, the curvature at some point can blow up to infinity in finite time. This is called a curvature blowup, and the blowup point is called a singularity.

When a blowup occurs, we can cut the manifold in two at the singular point and continue the flow on each half. If we can show that only finitely many blowups occur during the flow, then the manifold is split into finitely many pieces, each of which eventually becomes a three-sphere. The original manifold is then a gluing of finitely many spheres, hence itself a three-sphere, and the Poincaré conjecture follows. A fine analysis of singularities therefore becomes the crux. Hamilton settled most types of singularity; Perelman resolved the remaining types. Perelman also saw, with great acuity, that Hamilton's Ricci flow is the gradient flow of a so-called entropy energy, placing the flow in a variational framework. He gave the key ideas and the main outline of the proof, and many mathematicians filled in the details. With that, Thurston's geometrization conjecture was fully proved, and after a century of exploration the Poincaré conjecture was finally settled.

5. Computational techniques born of the Poincaré conjecture

The Poincaré conjecture itself is exceedingly abstract and dry: a simply connected closed 3-manifold is the 3-sphere, with no apparent practical value. But to prove it, humanity developed Thurston's geometrization program, invented Hamilton's Ricci flow, came to understand deeply the topology and geometry of three-manifolds, and brought the formation of singularities into the purview of mathematics. These advances in basic mathematics are bound to set off an "avalanche" in engineering and applied technology. The Ricci flow technique, for instance, provides a powerful method for constructing a Riemannian metric from prescribed curvature.

The Ricci flow is a nonlinear geometric partial differential equation, and the Ricci flow method is a textbook case of geometric analysis: proving geometric results with PDE techniques. Geometric analysis was founded by Shing-Tung Yau, and the proof of the Poincaré conjecture is another of its great victories. Thurston had advocated relatively traditional topological and geometric methods such as Teichmüller theory and hyperbolic geometry, while other mathematicians championed more combinatorial approaches; in the end, geometric analysis took the prize.

Hamilton's Ricci flow is defined on smooth manifolds, but in a computer every manifold is discretized, so we need a discrete Ricci flow theory on which to build computational methods. After years of effort, the author and collaborators established the theory of discrete surface Ricci flow, proving existence and uniqueness of the discrete solutions. Since almost every important problem in surface differential geometry runs through the uniformization theorem, we believe discrete curvature flow algorithms will play an ever larger role in engineering practice [1].

Figure 12. Uniformization of surfaces with boundary, computed by discrete Ricci flow.

Figures 8 and 12 show uniformizations of closed and bounded surfaces computed by discrete Ricci flow. In essence these two figures encompass every surface arising in real life: all of them are conformally mapped onto the three standard surfaces, the sphere, the Euclidean plane and the hyperbolic plane. Consequently, any new geometric algorithm that works on the three standard surfaces works on all surfaces, so discrete curvature flow greatly simplifies the design of geometric algorithms.

6. Precision medicine

The discrete curvature flow methods spawned by the Poincaré conjecture are widely applied in precision medicine. Human organs are essentially two-dimensional surfaces or three-dimensional manifolds, and curvature flow plays an irreplaceable role in analyzing and comparing their geometric features.

Figure 13. Virtual colonoscopy.

Virtual colonoscopy

Colorectal cancer is the fourth-leading killer of men, behind only cardiovascular and cerebrovascular disease. From middle age onward everyone develops rectal polyps, which grow year by year; once a polyp reaches a critical size, friction causes ulceration, and chronic ulceration can turn cancerous. Polyps grow very slowly, typically taking seven or eight years from first appearance to the critical size, so monitoring polyps is crucial to preventing colorectal cancer, and middle-aged adults should have a colonoscopy every two years. Traditional colonoscopy requires general anesthesia and the insertion of an optical endoscope into the rectum. Elderly patients have fragile intestinal walls and risk complications. Moreover, the intestinal wall has many folds; if a polyp hides inside a fold, the physician cannot see it and the diagnosis is missed.

In recent years North America and Japan have adopted virtual colonoscopy. The rectum is imaged by computed tomography, and the rectal surface is reconstructed from the images, as in Figure 14. We then need to unfold and flatten the rectal surface so that every fold is exposed, making it easy to find polyps and measure their sizes. Furthermore, as Figure 13 shows, the same patient's rectum is scanned twice per examination, in two different poses. The rectal surface is soft and elastic, and the surfaces from the two scans differ by a large elastic deformation, so we must build a smooth one-to-one correspondence between them. Constructing such a map directly between three-dimensional surfaces is relatively hard; once the surfaces are flattened into planar rectangles, computing a map between planar regions becomes much simpler. Flattening the rectal surface amounts to endowing it with a Riemannian metric whose curvature is zero everywhere, which can be computed directly with the Ricci flow algorithm, as in Figure 14.

Figure 14. Flattening the rectal surface with Ricci flow.

Virtual colonoscopy is now widely used in North America and Japan (though not yet common in China), mainly because it improves safety, lowers the rate of missed diagnoses, and reduces labor cost. Its spread has greatly improved the early detection of colorectal cancer and lowered its mortality, a major contribution to public health.


Figure 15. Virtual cystoscopy.

The same method applies to the bladder and other organs, as in Figure 15. The chief signs of bladder cancer are a thickening bladder wall and an inner wall that is no longer smooth but develops cauliflower-like geometric texture. Virtual cystoscopy can quantify these symptoms. Traditional cystoscopy causes patients great pain; the virtual method greatly reduces that suffering, a substantial advantage.

Figure 16. The cerebral cortex conformally mapped onto the unit sphere via Ricci flow, for comparison across scans.

Prevention and diagnosis of neurological disease

Neurological diseases such as Alzheimer's disease (commonly called senile dementia), epilepsy and childhood autism gravely threaten human health, and their prevention and diagnosis are of great practical importance. Through magnetic resonance imaging we can acquire the cortical surface of the human brain, as in Figure 16. Cortical geometry is extremely complex, with abundant folds, gyri and sulci that differ from person to person and change with age; Alzheimer's disease, for example, is often accompanied by atrophy of part of the cortex. To monitor disease progression, we scan the patient's brain every few months and must then compare the cortical surfaces from different times precisely. Direct comparison in three-dimensional space is very difficult: it is easy to align different gyri incorrectly, trapping the algorithm in a local optimum. Instead, as in Figure 16, we conformally map the cortical surfaces onto the sphere and build a smooth map between the spheres, which is simpler and more accurate. Mapping the cortex onto the sphere amounts to endowing the cortical surface with a Riemannian metric of curvature +1, which the Ricci flow method provides.


Figure 17. Geometric analysis of the hippocampus.

If the cerebral cortex is the database, the hippocampus is its index, as in Figure 17. When the hippocampus is damaged, long-term memories can neither be formed nor retrieved, and many neurological conditions deform it: epilepsy, drug abuse, Alzheimer's disease and others. Quantitative comparison and classification of hippocampal shape is therefore important. One precise method maps the hippocampus conformally onto the unit sphere; the area distortion factor then carries all the information of the original Riemannian metric, and together with the mean curvature the full geometry of the hippocampus is perfectly preserved. In other words, we convert the hippocampal surface into two functions on the sphere (area distortion and mean curvature). Comparing different hippocampal surfaces on the sphere measures their similarity precisely and supports diagnostic classification, an approach more quantitative and accurate than traditional methods.

Figure 18. Precise matching of facial surfaces.

Cosmetic surgery

In cosmetic surgery, evaluating the post-operative outcome is an important step, and it requires precise matching of the pre- and post-operative facial surfaces. As in Figure 18, we scanned two facial surfaces of the same person with different expressions and built a precise one-to-one map between them: each small circle on the calm face maps onto the corresponding small ellipse on the smiling face, so we can measure the geometric change at corresponding points. Precise maps between three-dimensional facial surfaces are thus a critical technology in this field.

Figure 19. A three-dimensional facial surface conformally mapped onto the plane, by the Ricci flow method.

As Figure 19 shows, we use the Ricci flow method to change the facial surface's Riemannian metric into a flat metric of zero curvature, spread the three-dimensional face onto the two-dimensional plane, build a smooth bijection between the planar regions, and thereby induce a bijection between the three-dimensional facial surfaces. The same method can also be used for 3D face recognition, although recognition demands far less accuracy of the map.

In other areas of precision medicine, such as orthodontics, artificial heart valves, artificial bone, real-time monitoring of radiotherapy and liver surgery planning, which all require imaging, geometric reconstruction and feature analysis of human organs, the Ricci flow method likewise plays an important role.

7. Summary and outlook

The Poincaré conjecture itself is pure and abstract: a simply connected closed three-manifold is the three-sphere, a statement with no apparent practical value whose simple, intuitive form can strike non-mathematicians as much ado about nothing. Yet the rigor demanded by pure mathematics drove generations of topologists and geometers to devote their lives to it, until their combined efforts completed the proof. For two-dimensional surfaces, a century separated the proof of the geometrization theorem, the uniformization theorem, from the corresponding algorithms; for three-manifolds, Thurston's geometrization program and its algorithms arrived almost simultaneously. The topological theory and the computational theory of three-manifolds have been deeply intertwined from the start. This shows that in the modern era, as computing advances, the cycle from pure theory to applied algorithm keeps shrinking.

We have also seen that in the course of proving the Poincaré conjecture, Thurston's geometrization program laid out the entire landscape of three-manifolds, Hamilton's Ricci flow gave a powerful tool for constructing Riemannian metrics from curvature, and the singularity analysis of Hamilton and Perelman broke through what had been forbidden territory. The author and many other mathematicians have developed the theory and algorithms of discrete Ricci flow and systematically applied curvature flow in engineering and medicine. In practice we have learned that in many critical applications nothing can replace the curvature flow method. Applications of the Ricci flow on two-dimensional surfaces are already beginning to spread; its applications on three-manifolds, deeper, subtler and more powerful, do not yet exist in practice. One reason is that three-manifolds lie far beyond everyday experience; another is that, compared with surface differential geometry, the topology and geometry of three-manifolds are far from widely known. But as a faithful portrait of natural truth, the topology and geometry of three-manifolds will sooner or later pervade practice. The avalanche set off by the Poincaré conjecture will, in the end, rewrite the course of history.

When Poincaré posed his topological conjecture, when Thurston discerned the basic geometric structures of three-manifolds, when Hamilton conceived the Ricci flow, and when Perelman saw that Hamilton's flow is in essence the gradient flow of an entropy energy, what they pursued was the grandeur of geometric structure and the depth of natural truth; practical value was never their final aim. Accumulated practical technique usually brings only evolution; satisfied curiosity can bring true revolution. May more young people, amid the world's dust and clamor, keep their sincerity and purity, their intense curiosity, their sensitivity to nature's beauty, and their passion for scientific truth!

(Readers interested in the theory, algorithms and applications of discrete surface Ricci flow may consult the monograph [1].)

References

[1] W. Zeng and X. Gu, Ricci Flow for Shape Analysis and Surface Registration: Theories, Algorithms and Applications, Springer, 2013.

Further reading

① What would we lose without basic science?  ② Video: What is topology?

③ Pure mathematics leaves the ivory tower: what do Shing-Tung Yau and 3D technology have to do with each other?

15 Must Read Books for Entrepreneurs in Data Science

Source: http://www.analyticsvidhya.com/blog/2016/04/15-read-books-entrepreneurs-data-science/


Introduction

The roots of entrepreneurship are old, but the fruits have never been as lucrative as they are now. Until 2010, not many of us had heard the term 'start-up'; now, not a day goes by without the business newspapers mentioning them. There has been a sudden surge in the courage people possess.

Today, I see one out of five people talking about a new business idea. Some of them even succeed in establishing their dream company, but only the determined ones last. In data science, the story is a bit different.

Success in data science is mainly driven by knowledge of the subject. Entrepreneurs are not required to work at ground level, but they must have a sound knowledge of how the work is done: which algorithms, tools and techniques are used to create products and services.

In order to gain this knowledge, you have two ways:

  1. You work for 5-6 years in data science, get to know things around and then start your business.
  2. You start reading books along the way and become confident enough to start within the first few years.

I would opt for the second option.


Why read books?

Think of our brain as a library. And, it’s a HUGE library.

What would an empty library look like? If I close my eyes and imagine it, I see dust, spider webs, the Brownian motion of dust particles, and darkness. If that image horrifies you, start reading books.

The books listed below give immense knowledge and motivation in the technology arena. Reading them gives you the chance to live many different entrepreneurial lives. Take them one by one; don't get overwhelmed. I've assembled a mix of technical and motivational books for entrepreneurs in data science. Happy reading!

List of Books

Data Science for Business

This book is written by Foster Provost & Tom Fawcett. It gives a great head start to anyone who is serious about doing business with big data analytics. It makes you believe that data is now business: no business in the world can sustain itself without leveraging the power of data. The book introduces you to the real side of data-analysis principles and algorithms without the technical stuff, and gives you enough intuition and confidence to lead a team of data scientists and recommend what's required. More importantly, it teaches you a winning approach to business problem solving.

Get the book: Buy Now

Big Data at Work

This book is written by Thomas H. Davenport. It reveals the increasing importance of big data in organizations, arguing with interesting numbers, research and statistics. Until 2009, companies worked on data samples; with the advent of powerful devices and data-storage capabilities, companies now work on whole datasets, not wanting to miss a single bit of information. The book unveils the real side of big data and its influence on our daily lives, on companies and on our jobs. As an entrepreneur, it is extremely important for you to understand big data and its terminology.

Get the book: Buy Now

Lean Analytics

This book is written by Alistair Croll and Benjamin Yoskovitz. It's one of the most appreciated books on data startups. It consists of practical, detailed research, advice and guidance that can help you build your startup faster, and it gives enough intuition to build data-driven products and market them. The language is simple to understand, and there are enough real-world examples to convince you that a business needs data analytics like a human needs air. For an entrepreneur, it introduces the practical side of product development and what it takes to succeed in a red-ocean market.

Get the book: Buy Now

Moneyball

This book is written by Michael Lewis. It's a brilliant tale sprinkled with some serious inspiration. A man named Billy Beane does what most of the world failed to imagine, just by using data and statistics, paving a path to victory when the situation wasn't favorable. Running a business needs continuous motivation, and this can be a good place to start. Note that the book involves the technical aspects of baseball; if you don't know the sport, you might struggle in the early chapters. A movie has also been made from this book. Do watch it!

Get the book: Buy Now

Elon Musk

This book is written by Ashlee Vance. Few of us will ever live the life of Elon Musk, but this book lets us dive into his life and experience the rise of a fantastic future. Elon is the face behind PayPal, Tesla and SpaceX, and has dreamed of making space travel easy and cheap. Recently he was applauded by Barack Obama for SpaceX's successful landing of a rocket on a ship at sea. People admire him and want to know his secrets, and this is where you can look for them. As an entrepreneur, you will learn the must-have ingredients for becoming successful in the technology space.

Get the book: Buy Now

Keeping up with the Quants

This book is written by Thomas H. Davenport and Jinho Kim. As we all know, data science is driven by numbers and math (quants). Inspired by Moneyball, this book teaches you methods of using quantitative analysis for decision making. An entrepreneur is a terminal of decision making, and must learn to decide using numbers and analysis rather than intuition. The language is easy to understand and suits people without a math background too. The book will also make you comfortable with basic statistics and quantitative calculations in the world of business.

Get the book: Buy Now

The Signal and the Noise

The author of this book is Nate Silver, the famous statistician who correctly predicted the 2012 US Presidential election. The book shows the real art and science of making predictions from data, an art that involves developing the ability to filter out noise and predict correctly. It includes interesting examples that convey the ultimate reasons behind the success and failure of predictions. With more and more data, predictions have become prone to noise errors, so it is increasingly important to understand the science of making predictions with big data. The chapters are interesting and intuitive.

Get the book: Buy Now

When Genius Failed

This book is written by Roger Lowenstein. It is the epic story of the rise and failure of a hedge fund. For an entrepreneur, the book holds ample lessons on investing, market conditions and capital management. It is the story of a small fund that applied quantitative bond-pricing techniques around the world and seemed to turn every investment into a profit. Yet the fund did not last: its rapid rise was followed by collapse, and the impact of its failure was so devastating that the US Federal Reserve stepped in to organize a rescue, because the fund's bankruptcy would have had a large negative influence on the world economy.

Get the book: Buy Now

Lean Startup

This book is written by Eric Ries. In one line, it teaches you how not to fail at the start of your business. It reveals proven strategies followed by startups around the world and has an abundance of stories to keep you on the right path. An entrepreneur should read it whenever motivation runs low. It teaches you to learn quickly, implement new methods, and act fast when something doesn't work out. The book applies to all industries and is not specific to data science.

Get the book: Buy Now

Web Analytics 2.0

This book is written by Avinash Kaushik. It is one of the best books for learning web analytics. The internet is the fastest way of collecting data, and every entrepreneur must learn the art of internet accounting. Most businesses today struggle with a weak presence on social media and other internet platforms. Using proven strategies and actionable insights, this book helps you solve the challenges that could stand in your way. It also provides a winning template that applies in most situations, and it focuses on choosing the right metrics and keeping them under control.

Get the book: Buy Now

Predictive Analytics

This book is written by Eric Siegel. It is a good follow-up to Web Analytics 2.0: once you have understood the underlying concepts of internet data, metrics and key strategies, this book teaches you how to use that knowledge to make predictions. It is simple to understand and covers many interesting case studies of how companies predict our behavior and sell us products. It does not cover the technical side, but explains the general workings of predictive analytics and its applications.

Get the book: Buy Now

Freakonomics

This book is written by Steven D. Levitt and Stephen J. Dubner. It shows the importance of numbers, data and quantitative analysis through a series of fascinating stories, arguing that there is a logic to everything that happens around us. Reading it will make you aware of the unexplored depth at which data affects our real lives. It draws an unexpected analogy between schoolteachers and sumo wrestlers, and its bizarre stories of crime, real estate and drug dealing will certainly keep things exciting.

Get the book: Buy Now

Founders at Work

This book is written by Jessica Livingston. Again, it isn't data-science specific, but it is a source of motivation to keep you moving forward. It is a collection of interviews with the founders of startups around the world, focused on the early days: how they acted when they started. The book offers enough proven ideas, strategies and lessons to anticipate and avoid pitfalls in your first days of business. It includes stories from Steve Wozniak (Apple), Max Levchin (PayPal), Caterina Fake (Flickr) and many more. In total there are 32 interviews, which means the chance to learn from 32 mentors in a single book. A must-read for entrepreneurs.

Get the book: Buy Now

Bootstrapping a Business

This book is written by Greg Gianforte and Marcus Gibson. It teaches you what to do when you are running short of money and still don't want to stop. It is a must-read for every entrepreneur; considering the investment that data science startups require, it deserves a special place in an entrepreneur's heart. The book reveals eye-opening truths and strategies that can help you build a great company. Gianforte and Gibson argue that money is rarely the real reason startups fail; the founder's perspective is. With its stories of success and failure, it is another chance to live many lives through one book.

Get the book: Buy Now

Analytics at Work

This book is written by Thomas H. Davenport, Jeanne G. Harris and Robert Morrison. It describes managers' growing use of analytical tools and concepts to make informed business decisions as the decision-making process accelerates. For greater impact, it includes examples from well-known companies such as Hotels.com and Best Buy. It covers recruiting, coordinating people, and the use of data and analytics at the enterprise level. Many of us know about data and about analytics; only a few know how to use the two together. This quick book has it all!

Get the book: Buy Now

End Notes

This marks the end of the list. While compiling it, I realized most of these books are about sharing experience and learning from the mistakes of others. It is also immensely important to possess quantitative ability to become good at data science. I suggest you make a reading list and stick to it throughout the year. You can start with any book; I'd suggest starting with a motivational one.

Have you read any other books? What were your key takeaways? Did you like reading this article? Share your knowledge and experiences in the comments below.

Essentials of Machine Learning Algorithms (with Python and R Codes)

Source: http://www.analyticsvidhya.com/blog/2015/08/common-machine-learning-algorithms/

Introduction

Google’s self-driving cars and robots get a lot of press, but the company’s real future is in machine learning, the technology that enables computers to get smarter and more personal.

– Eric Schmidt (Google Chairman)

We are probably living in the most defining period of human history. The period when computing moved from large mainframes to PCs to cloud. But what makes it defining is not what has happened, but what is coming our way in years to come.

What makes this period exciting for someone like me is the democratization of the tools and techniques that followed the boost in computing. Today, as a data scientist, I can build data-crunching machines with complex algorithms for a few dollars per hour. But reaching here wasn't easy! I had my dark days and nights.

 

Who can benefit the most from this guide?

What I am giving out today is probably the most valuable guide I have ever created.

The idea behind creating this guide is to simplify the journey of aspiring data scientists and machine learning enthusiasts across the world. Through this guide, I will enable you to work on machine learning problems and gain from experience. I am providing a high level understanding about various machine learning algorithms along with R & Python codes to run them. These should be sufficient to get your hands dirty.

(figure: overview of machine learning algorithms: supervised and unsupervised)

I have deliberately skipped the statistics behind these techniques, as you don't need to understand them at the start. So, if you are looking for a statistical understanding of these algorithms, you should look elsewhere. But if you are looking to equip yourself to start building machine learning projects, you are in for a treat.

 

Broadly, there are 3 types of Machine Learning Algorithms:

1. Supervised Learning

How it works: This type of algorithm uses a target / outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves the desired level of accuracy on the training data. Examples of Supervised Learning: Regression, Decision Tree, Random Forest, KNN, Logistic Regression etc.

 

2. Unsupervised Learning

How it works: In this type of algorithm, we do not have any target or outcome variable to predict / estimate. It is used for clustering a population into different groups, which is widely applied for segmenting customers into groups for specific interventions. Examples of Unsupervised Learning: Apriori algorithm, K-means.

 

3. Reinforcement Learning:

How it works: Using this type of algorithm, the machine is trained to make specific decisions. It works this way: the machine is exposed to an environment where it trains itself continually using trial and error. The machine learns from past experience and tries to capture the best possible knowledge to make accurate business decisions. Example of Reinforcement Learning: Markov Decision Process.

List of Common Machine Learning Algorithms

Here is the list of commonly used machine learning algorithms. These algorithms can be applied to almost any data problem:

  1. Linear Regression
  2. Logistic Regression
  3. Decision Tree
  4. SVM
  5. Naive Bayes
  6. KNN
  7. K-Means
  8. Random Forest
  9. Dimensionality Reduction Algorithms
  10. Gradient Boost & Adaboost

1. Linear Regression

It is used to estimate real values (cost of houses, number of calls, total sales etc.) based on continuous variable(s). Here, we establish the relationship between independent and dependent variables by fitting a best-fit line. This best-fit line is known as the regression line and is represented by the linear equation Y = a*X + b.

The best way to understand linear regression is to relive an experience from childhood. Say you ask a fifth grader to arrange the people in his class in increasing order of weight, without asking them their weights! What do you think the child will do? He or she would likely look (visually analyze) at people's height and build, and arrange them using a combination of these visible parameters. This is linear regression in real life! The child has actually figured out that height and build are correlated with weight by a relationship that looks like the equation above.

In this equation:

  • Y – Dependent Variable
  • a – Slope
  • X – Independent variable
  • b – Intercept

The coefficients a and b are derived by minimizing the sum of squared vertical distances between the data points and the regression line.

Look at the below example. Here we have identified the best fit line having linear equation y=0.2811x+13.9. Now using this equation, we can find the weight, knowing the height of a person.

(figure: height-weight scatter plot with the best-fit line y = 0.2811x + 13.9)

Linear regression is mainly of two types: simple linear regression and multiple linear regression. Simple linear regression is characterized by one independent variable, while multiple linear regression (as the name suggests) is characterized by multiple (more than 1) independent variables. While finding the best-fit line, you can also fit a polynomial or curvilinear relationship; this is known as polynomial or curvilinear regression.
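As a minimal illustration of the polynomial case (an added sketch, not from the original article; the sample numbers below are made up):

#Import Library
import numpy as np
# Toy data that follows a roughly quadratic trend
x = np.array([1, 2, 3, 4, 5])
y = np.array([1.1, 4.2, 8.9, 16.1, 24.8])
# Fit a degree-2 polynomial (polynomial regression)
coeffs = np.polyfit(x, y, deg=2)
y_hat = np.polyval(coeffs, x)  # fitted values at the sample points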

Python Code

#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
#Load Train and Test datasets
#Identify feature and response variable(s) and values must be numeric and numpy arrays
x_train=input_variables_values_training_datasets
y_train=target_variables_values_training_datasets
x_test=input_variables_values_test_datasets
# Create linear regression object
linear = linear_model.LinearRegression()
# Train the model using the training sets and check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
#Equation coefficient and Intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
#Predict Output
predicted= linear.predict(x_test)

R Code

#Load Train and Test datasets
#Identify feature and response variable(s); values must be numeric
x <- cbind(x_train, y_train)
# Train the model using the training sets and inspect the summary
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted <- predict(linear, x_test)

 

2. Logistic Regression

Don't get confused by its name! It is a classification, not a regression, algorithm. It is used to estimate discrete values (binary values like 0/1, yes/no, true/false) based on a given set of independent variable(s). In simple words, it predicts the probability of occurrence of an event by fitting data to a logit function. Hence, it is also known as logit regression. Since it predicts probabilities, its output values lie between 0 and 1 (as expected).

Again, let us try and understand this through a simple example.

Let's say your friend gives you a puzzle to solve. There are only 2 outcome scenarios: either you solve it or you don't. Now imagine that you are being given a wide range of puzzles and quizzes in an attempt to understand which subjects you are good at. The outcome of this study would be something like this: if you are given a trigonometry-based tenth-grade problem, you are 70% likely to solve it. On the other hand, if it is a fifth-grade history question, the probability of getting the answer right is only 30%. This is what logistic regression provides you.

Coming to the math, the log odds of the outcome is modeled as a linear combination of the predictor variables.

odds= p/ (1-p) = probability of event occurrence / probability of not event occurrence
ln(odds) = ln(p/(1-p))
logit(p) = ln(p/(1-p)) = b0+b1X1+b2X2+b3X3....+bkXk

Above, p is the probability of presence of the characteristic of interest. It chooses parameters that maximize the likelihood of observing the sample values, rather than parameters that minimize the sum of squared errors (as in ordinary regression).

Now, you may ask: why take a log? For the sake of simplicity, let's just say this is one of the best mathematical ways to replicate a step function. I could go into more detail, but that would defeat the purpose of this article.

(figure: the logistic function)

Python Code

#Import Library
from sklearn.linear_model import LogisticRegression
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create logistic regression object
model = LogisticRegression()
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Equation coefficient and Intercept
print('Coefficient: \n', model.coef_)
print('Intercept: \n', model.intercept_)
#Predict Output
predicted= model.predict(x_test)

R Code

x <- cbind(x_train, y_train)
# Train the model using the training sets and inspect the summary
logistic <- glm(y_train ~ ., data = x, family = 'binomial')
summary(logistic)
#Predict Output
predicted <- predict(logistic, x_test, type = "response")  # predicted probabilities

 

Furthermore...

There are many different steps that could be tried in order to improve the model, for example adding interaction terms, removing features, or applying regularization.

 

3. Decision Tree

This is one of my favorite algorithms and I use it quite frequently. It is a type of supervised learning algorithm that is mostly used for classification problems. Surprisingly, it works for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets, based on the most significant attributes / independent variables, to make the groups as distinct as possible. For more details, you can read: Decision Tree Simplified.

(figure: a decision tree splitting a population on weather attributes to predict 'play or not'; source: statsexchange)

In the image above, you can see that the population is classified into four different groups based on multiple attributes, to identify 'if they will play or not'. To split the population into groups that are as distinct as possible, it uses various techniques like Gini, information gain, Chi-square and entropy.

The best way to understand how a decision tree works is to play Jezzball, a classic game from Microsoft (image below). Essentially, you have a room with moving walls and you need to create walls such that the maximum area gets cleared off without the balls.

(figure: the Microsoft game Jezzball)

So, every time you split the room with a wall, you are trying to create two different populations within the same room. Decision trees work in a very similar fashion, by dividing a population into groups that are as different as possible.

More: Simplified Version of Decision Tree Algorithms

Python Code

#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import tree
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create tree object 
model = tree.DecisionTreeClassifier(criterion='gini') # for classification; criterion can be 'gini' or 'entropy' (information gain), and the default is 'gini'
# model = tree.DecisionTreeRegressor() for regression
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

library(rpart)
x <- cbind(x_train, y_train)
# grow tree
fit <- rpart(y_train ~ ., data = x, method = "class")
summary(fit)
#Predict Output
predicted <- predict(fit, x_test, type = "class")

 

4. SVM (Support Vector Machine)

It is a classification method. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate.

For example, if we only had two features, like the height and hair length of an individual, we'd first plot these two variables in two-dimensional space, where each point has two co-ordinates. (The points lying closest to the eventual separating line are known as support vectors.)

(figure: the two classes plotted in two-dimensional space)

Now, we will find some line that splits the data between the two differently classified groups. This will be the line for which the distance to the closest point in each of the two groups is largest.

(figure: the separating line between the two classes)

In the example shown above, the line which splits the data into two differently classified groups is the black line, since the two closest points are farthest away from it. This line is our classifier. Then, depending on which side of the line the testing data lands, that is the class we assign to the new data.

More: Simplified Version of Support Vector Machine

Think of this algorithm as playing JezzBall in n-dimensional space. The tweaks in the game are:

  • You can draw lines / planes at any angle (rather than just horizontal or vertical as in the classic game)
  • The objective of the game is to segregate balls of different colors in different rooms.
  • And the balls are not moving.

 

Python Code

#Import Library
from sklearn import svm
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create SVM classification object
model = svm.SVC()  # there are various options associated with it; this is the simple classification case
# Train the model using the training sets and check score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

library(e1071)
x <- cbind(x_train, y_train)
# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

 

5. Naive Bayes

It is a classification technique based on Bayes’ theorem with an assumption of independence between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, a naive Bayes classifier would consider all of these properties to independently contribute to the probability that this fruit is an apple.

A naive Bayes model is easy to build and particularly useful for very large data sets. Along with its simplicity, naive Bayes is known to outperform even highly sophisticated classification methods.

Bayes' theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c):

P(c|x) = P(x|c) * P(c) / P(x)

Here,

  • P(c|x) is the posterior probability of class (target) given predictor (attribute).
  • P(c) is the prior probability of class.
  • P(x|c) is the likelihood which is the probability of predictor given class.
  • P(x) is the prior probability of predictor.

Example: Let’s understand it using an example. Below I have a training data set of weather and corresponding target variable ‘Play’. Now, we need to classify whether players will play or not based on weather condition. Let’s follow the below steps to perform it.

Step 1: Convert the data set to frequency table

Step 2: Create Likelihood table by finding the probabilities like Overcast probability = 0.29 and probability of playing is 0.64.

(figure: frequency and likelihood tables for the weather data)

Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.

Problem: Players will play if the weather is sunny. Is this statement correct?

We can solve it using the method discussed above: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)

Here we have P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, and P(Yes) = 9/14 = 0.64.

Now, P(Yes | Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability.
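A quick sanity check of this arithmetic, using only the probabilities quoted above (an added snippet, not part of the original article):

# Probabilities quoted above for the weather / 'Play' training data
p_sunny_given_yes = 3 / 9   # P(Sunny | Yes)
p_sunny = 5 / 14            # P(Sunny)
p_yes = 9 / 14              # P(Yes)
# Bayes' theorem: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny
print(round(p_yes_given_sunny, 2))  # prints 0.6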

Naive Bayes uses a similar method to predict the probability of different classes based on various attributes. This algorithm is mostly used in text classification and with problems having multiple classes.

Python Code

#Import Library
from sklearn.naive_bayes import GaussianNB
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Naive Bayes classification object; there are other variants for other distributions, such as Bernoulli or Multinomial Naive Bayes
model = GaussianNB()
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

library(e1071)
x <- cbind(x_train, y_train)
# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

 

6. KNN (K- Nearest Neighbors)

It can be used for both classification and regression problems. However, it is more widely used in classification problems in industry. K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of its k neighbors: the case is assigned to the class most common among its K nearest neighbors, as measured by a distance function.

These distance functions can be Euclidean, Manhattan, Minkowski or Hamming distance. The first three are used for continuous variables and the fourth (Hamming) for categorical variables. If K = 1, the case is simply assigned to the class of its nearest neighbor. At times, choosing K turns out to be a challenge while performing KNN modeling.

More: Introduction to k-nearest neighbors : Simplified.

(figure: KNN classification example)

KNN can easily be mapped to our real lives. If you want to learn about a person, of whom you have no information, you might like to find out about his close friends and the circles he moves in and gain access to his/her information!

Things to consider before selecting KNN:

  • KNN is computationally expensive
  • Variables should be normalized, or else higher-range variables can bias it
  • Invest in pre-processing (e.g. outlier and noise removal) before running KNN

Python Code

#Import Library
from sklearn.neighbors import KNeighborsClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create KNeighbors classifier object
model = KNeighborsClassifier(n_neighbors=6)  # default value for n_neighbors is 5
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

# knn() lives in the 'class' package; it classifies the test set directly, with no separate fit step
library(class)
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 5)
summary(predicted)

 

7. K-Means

It is a type of unsupervised algorithm which solves the clustering problem. Its procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters). Data points inside a cluster are homogeneous, and heterogeneous with respect to other clusters.

Remember figuring out shapes from ink blots? K-means is somewhat similar to this activity. You look at the shape and spread to decipher how many different clusters / populations are present!

(figure: an ink-blot texture)

How K-means forms clusters:

  1. K-means picks k points, known as centroids, one for each cluster.
  2. Each data point forms a cluster with the closest centroid, i.e. k clusters.
  3. Find the centroid of each cluster based on the existing cluster members. Here we have new centroids.
  4. As we have new centroids, repeat steps 2 and 3: find the closest distance of each data point to the new centroids and re-associate it with a new k-cluster. Repeat this process until convergence, i.e. until the centroids no longer change.

How to determine the value of K:

In K-means, we have clusters, and each cluster has its own centroid. The sum of squared differences between the centroid and the data points within a cluster constitutes the within-cluster sum of squares for that cluster. When the within-cluster sums of squares of all clusters are added, the total is the within-cluster sum of squares for the cluster solution.

We know that as the number of clusters increases, this value keeps decreasing; but if you plot it, you will see that the sum of squared distances drops sharply up to some value of k, and much more slowly after that. There we can find the optimum number of clusters.
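A minimal sketch of this elbow heuristic (an added illustration, not from the original article; it assumes a feature matrix X, as in the snippets below, and uses scikit-learn's inertia_ attribute for the within-cluster sum of squares):

#Import Library
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
# Fit K-means for k = 1..10 and record the total within-cluster sum of squares
inertias = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    inertias.append(km.inertia_)
# Plot and look for the k where the curve bends (the 'elbow')
plt.plot(range(1, 11), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('total within-cluster sum of squares')
plt.show()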

(figure: within-cluster sum of squares versus number of clusters, showing the elbow)

Python Code

#Import Library
from sklearn.cluster import KMeans
#Assumed you have, X (attributes) for training data set and x_test(attributes) of test_dataset
# Create KMeans cluster object
k_means = KMeans(n_clusters=3, random_state=0)
# Train the model using the training set
k_means.fit(X)
#Predict Output
predicted= k_means.predict(x_test)

R Code

# kmeans() ships with base R's stats package
fit <- kmeans(X, 3)  # 3-cluster solution

 

8. Random Forest

Random Forest is a trademark term for an ensemble of decision trees. In Random Forest, we have a collection of decision trees (hence "forest"). To classify a new object based on its attributes, each tree gives a classification, and we say the tree "votes" for that class. The forest chooses the classification with the most votes (over all the trees in the forest).

Each tree is planted & grown as follows:

  1. If the number of cases in the training set is N, a sample of N cases is taken at random, but with replacement. This sample will be the training set for growing the tree.
  2. If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M, and the best split on these m is used to split the node. The value of m is held constant while the forest is grown.
  3. Each tree is grown to the largest extent possible. There is no pruning.

For more details on this algorithm, a comparison with decision trees, and the tuning of model parameters, I suggest reading these articles:

  1. Introduction to Random forest – Simplified

  2. Comparing a CART model to Random Forest (Part 1)

  3. Comparing a Random Forest to a CART model (Part 2)

  4. Tuning the parameters of your Random Forest model

Python

#Import Library
from sklearn.ensemble import RandomForestClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Random Forest object
model= RandomForestClassifier()
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

library(randomForest)
x <- cbind(x_train, y_train)
# Fitting model
fit <- randomForest(y_train ~ ., data = x, ntree = 500)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)

 

9. Dimensionality Reduction Algorithms

In the last 4-5 years, there has been an exponential increase in data capture at every possible stage. Corporates, government agencies and research organisations are not only tapping new data sources, they are also capturing data in great detail.

For example, e-commerce companies capture ever more details about customers: their demographics, web-crawling history, likes and dislikes, purchase history, feedback and much else, to give them personalized attention beyond what your nearest grocery shopkeeper can manage.

As a data scientist, the data we are offered also consists of many features. This sounds good for building robust models, but there is a challenge: how do you identify the highly significant variables out of 1000 or 2000? In such cases, dimensionality reduction helps us, together with techniques such as decision trees, random forests, PCA, factor analysis, selection based on the correlation matrix, the missing-value ratio, and others.

To know more about these algorithms, you can read "Beginners Guide To Learn Dimension Reduction Techniques".

Python  Code

#Import Library
from sklearn import decomposition
#Assumed you have training and test data set as train and test
# Create PCA object; the default value of n_components is min(n_sample, n_features)
pca = decomposition.PCA(n_components=k)
# For Factor analysis
#fa= decomposition.FactorAnalysis()
# Reduce the dimension of the training dataset using PCA
train_reduced = pca.fit_transform(train)
#Reduce the dimension of the test dataset
test_reduced = pca.transform(test)

R Code

library(stats)
# Principal components analysis on the training data (cor = TRUE uses the correlation matrix)
pca <- princomp(train, cor = TRUE)
train_reduced <- predict(pca, train)
test_reduced <- predict(pca, test)

 

10. Gradient Boosting & AdaBoost

GBM & AdaBoost are boosting algorithms used when we have plenty of data and want predictions with high predictive power. Boosting is an ensemble learning technique that combines the predictions of several base estimators in order to improve robustness over a single estimator. It combines multiple weak or average predictors to build a strong predictor. These boosting algorithms do consistently well in data science competitions like Kaggle, AV Hackathon and CrowdAnalytix.

More: Know about Gradient and AdaBoost in detail

Python Code

#Import Library
from sklearn.ensemble import GradientBoostingClassifier
#Assumed you have, X (predictor) and Y (target) for training data set and x_test(predictor) of test_dataset
# Create Gradient Boosting Classifier object
model= GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
# Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted= model.predict(x_test)

R Code

library(caret)
x <- cbind(x_train, y_train)
# Fitting model: repeated cross-validation, then a GBM via caret's train()
fitControl <- trainControl(method = "repeatedcv", number = 4, repeats = 4)
fit <- train(y_train ~ ., data = x, method = "gbm", trControl = fitControl, verbose = FALSE)
#Predict Output
predicted <- predict(fit, x_test, type = "prob")[,2]

GradientBoostingClassifier and Random Forest are two different tree-ensemble methods (boosting and bagging, respectively), and people often ask about the difference between the two algorithms.

End Notes

By now, I am sure, you would have an idea of commonly used machine learning algorithms. My sole intention behind writing this article and providing the codes in R and Python is to get you started right away. If you are keen to master machine learning, start right away. Take up problems, develop a physical understanding of the process, apply these codes and see the fun!

Did you find this article useful ? Share your views and opinions in the comments section below.

If you like what you just read & want to continue your analytics learning, subscribe to our emails, follow us on Twitter or like our Facebook page.

Some lesser-known truths about programming

Source: http://automagical.rationalmind.net/2010/08/17/some-lesser-known-truths-about-programming/

My experience as a programmer  has taught me a few things about writing software. Here are some things that people might find surprising about writing code:

  • Averaging over the lifetime of the project, a programmer spends about 10-20% of his time writing code, and most programmers write about 10-12 lines of code per day that go into the final product, regardless of their skill level. Good programmers spend much of the other 90% thinking, researching, and experimenting to find the best design. Bad programmers spend much of that 90% debugging code by randomly making changes and seeing if they work.
  • A good programmer is ten times more productive than an average programmer. A great programmer is 20-100 times more productive than the average. This is not an exaggeration – studies since the 1960’s have consistently shown this. A bad programmer is not just unproductive – he will not only not get any work done, but create a lot of work and headaches for others to fix.“A great lathe operator commands several times the wage of an average lathe operator, but a great writer of software code is worth 10,000 times the price of an average software writer.” –Bill Gates
  • Great programmers spend little of their time writing code – at least code that ends up in the final product. Programmers who spend much of their time writing code are too lazy, too ignorant, or too arrogant to find existing solutions to old problems. Great programmers are masters at recognizing and reusing common patterns. Good programmers are not afraid to refactor (rewrite) their code  to reach the ideal design. Bad programmers write code which lacks conceptual integrity, non-redundancy, hierarchy, and patterns, and so is very difficult to refactor. It’s easier to throw away bad code and start over than to change it.
  • Software development obeys the laws of entropy, like any other process. Continuous change leads to software rot, which erodes the conceptual integrity of the original design. Software rot is unavoidable, but programmers who fail to take conceptual integrity into consideration create software that rots so fast that it becomes worthless before it is even completed. Entropic failure of conceptual integrity is probably the most common reason for software project failure. (The second most common reason is delivering something other than what the customer wanted.) Software rot slows down progress exponentially, so many projects face exploding timelines and budgets before they are mercifully killed.
  • A 2004 study found that most software projects (51%) will fail in a critical aspect, and 15% will fail totally. This is an improvement since 1994, when 31% failed.
  • Although most software is made by teams, it is not a democratic activity. Usually, just one person is responsible for the design, and the rest of the team fills in the details.
  • Programming is hard work. It’s an intense mental activity. Good programmers think about their work 24/7. They write their most important code in the shower and in their dreams. Because the most important work is done away from a keyboard, software projects cannot be accelerated by spending more time in the office or adding more people to a project.

What kind of profession is front-end engineering?

2016-03-20 Lü Dabao (吕大豹), via 前端开发

From: 医小生与程序猿 (WeChat ID: doctor_programmer)

Link: http://www.cnblogs.com/lvdabao/p/5229640.html

The front-end engineer is no longer an unfamiliar role in the software industry, even though the role has existed for only a few years. As a front-end engineer three years into the job, I will try to explain this profession, combining industry norms with my own understanding. This article is written for: software developers outside the web stack; product managers and others tied to product work; bosses agonizing over whether to hire a front-end engineer; front-end newcomers just starting their first job; and everyone else curious about the front end.

The English title is front-end engineer, abbreviated FE, which is how I will refer to it below. Domestic hiring of FEs began around 2011; before that, front-end work was mostly handled by server-side engineers, or HTML pages were produced by designers. So what gave rise to the FE position? This article looks at the FE's work and at the skills and qualities a professional FE should have.

The craftsman of user experience

An FE's first job is building user interfaces, which in a web system means web pages. Why do pages need a dedicated FE? The answer is "user experience". As the idea of web 2.0 spread and web 3.0 was proposed, users became the internet's main producers, and web pages took on more and more functionality.

On one hand, companies now have a strong appetite for user experience. That is easy to understand: if your product looks like a phishing site and is painful to use, some users will walk away. Non-internet companies face the same thing. You can pour effort into optimizing the database and the server load, yet your customers can hardly perceive it; if your interface still looks like it shipped in the 1980s or 90s, their first impression is that the system is no good, and they will not buy it. Spend a little time building a fresh interface instead, and their first impression is that the system is impressively advanced. Never underestimate first impressions: for a lay audience they are often decisive. Many companies have realized this, and the demand for user experience has risen accordingly.

On the other hand, today's users are picky. The products they use keep getting flashier; they have been spoiled, and if yours is the least bit annoying they will go complain about you on Weibo.

The FE is the steward of user experience. After the product manager sketches the interaction prototype and the designer works out the interaction details, the FE types out that code line by line. Every button and image the FE produces is clicked by thousands upon thousands of users; the FE is in "zero-distance contact" with them. As the implementer of the product's interactions, an FE must of course master HTML and CSS, but the higher demands are actually non-technical.

An FE needs a deep feel for user experience. Take a hyperlink on a page: when the font is small, users may miss it when they click, because the clickable region hugs the text. The front end can enlarge that clickable region in a very simple way so the link is easier to hit. That is user experience; as "The Art of the Moment" puts it, touching the user's heart takes only an instant. This understanding also shows in a command of interaction conventions. When an interface feels nimble without the user being able to say exactly why, it is often because its buttons have well-designed four-state styling (normal, hover, pressed, disabled) that gives feedback on every action.

An FE who understands user experience makes work that speaks to users and touches the soft spot in their hearts.

An FE needs a touch of obsessiveness, a refusal to tolerate any blemish: making page scrolling smoother, reducing visual jitter, aligning positions to the pixel; when the user touches a string of digits that is not a phone number, keeping the phone from popping up the dialer; and so on. Product managers cannot perceive many of these details, because they are scattered technical tricks that only an FE accumulates drop by drop. The most exacting go further: shaving a few more milliseconds off page response time, saving the phone a few kilobytes of traffic, a little battery. Even users cannot perceive these, but with millions or tens of millions of users the value shows.

A front-end engineer should be meticulous, have a feel for beauty, pursue perfection stubbornly, have taste, ideas and a sense of the big picture, and ideally know a little psychology.

Client-side business logic

An elegant interface is only the first step; programming is also a required skill, because the FE owns the client-side business logic. In the past the client was just an IE browser, with no business logic to speak of. Not anymore: users write articles and socialize in the browser, and the more advanced complete real work with online tools.

JavaScript is the programming language an FE must master: its strengths and weaknesses, the major programming paradigms and development patterns. The FE uses every technique available to build ever richer interactive interfaces, while coordinating with server-side engineers and debugging interfaces to complete the loop: render the page, respond to user actions, submit user data, and report the results back.

This requires a foundation in software development and a grasp of how computers and network communication work, which is why FEs from a computer science background have an edge.

The front end needs architecture too

Architecture, for writing web pages? What is there to architect? To answer that, understand first that FE work is no longer just "writing pages". As front-end codebases have grown, patterns like modular development, MVC and MVVM have emerged, and teams have evolved from solo work to team development.

So a senior front-end engineer needs architectural ability, including but not limited to:

  • Knowing the leading frameworks and integrating them
  • Shaping a development model that fits the project's business
  • Designing front-end testing schemes to safeguard code quality
  • Organizing the team's workflow with engineering tooling

Extending forward, extending backward

The Internet of Things market keeps heating up, and the phone is a key node in the IoT. The FE's battlefield is no longer just the browser; it will extend to all kinds of "end devices". Thanks to JavaScript's flexibility, you can already use it to build Windows, iOS and Android applications, and apps for smart TVs; tomorrow it may be VR, wearables and smart appliances. That is the direction in which the front end extends forward.

Meanwhile, with the arrival of Node.js, JavaScript has, remarkably, gained server-side powers: what used to be done in Java or PHP can now be done in JS. The front-end camp has always included people who came from the back end and know server-side development; armed with Node.js, and given the market's appetite for shipping products fast and iterating quickly, the FE's path backward is wide and bright. Indeed the term "full-stack engineer" was coined the year before last; industry leaders like BAT have long used Node.js for infrastructure, and many small, fast startups use it for rapid iterative development.

Continuous learning

Technology turns over faster in the front end than in most other fields, probably because this field sits closest to users. Some of the new technology is outright disruptive, and an FE must keep pace or the product's experience will lag the competition.

Some requests from marketing get rejected because the product manager, judging from years of experience, deems them infeasible. In fact, as new technology appears, some features you thought impossible can now be implemented in the front end. As HTML5 support broadens, the front end grows more capable: canvas exposes every pixel of an image, giving the front end image-processing power; the FileReader API grants the ability to read local files; geolocation is available; and so on.

All of this is what the FE must keep learning. A competent FE sustains the habit of continuous study and keeps a keen nose for new technology. "Live until you are old, learn until you are old" might as well describe the front-end engineer.

A programmer with high EQ

Most people picture programmers as low-EQ and taciturn. The front-end engineer should be the exception; the nature of the work demands it.

In the workflow, the FE sits downstream of the designer, receiving design mockups and turning them into pages, and upstream of the back-end engineer, submitting user data to the server. Horizontally, the FE works closely with the product manager, discussing interaction details at any moment. Connecting so many team members, the FE must be both glue and lubricant.

An FE needs strong communication and comprehension skills. We joke that "designers live in fairy tales", because sometimes their designs defy convention and cannot be implemented. At such times you must patiently explain to the designer how things work and why, and which basic norms a design must follow. You must grasp the product manager's intent and understand what all that gesturing was actually asking for. And when talking with back-end engineers, you must switch at once into programming mode: data types, object orientation, design patterns.

You need to switch roles, registers and topics at will. So yes, you must be a programmer with high EQ.

That is my understanding of the front-end engineer. The bar to entry is low, but becoming a professional FE requires mastering a great deal. Beyond technology, what matters more, I believe, is overall quality: the meticulousness, taste, ideas and EQ discussed above. After all, you reach users through your code and are supposed to delight them. In a sense, you have to be a good lover.