Was Knuth Really Framed by Jon Bentley?

Recently, the formal methods specialist Hillel Wayne posted an interesting article discussing whether Donald Knuth was actually framed when Jon Bentley asked him to demonstrate literate programming. (Knuth came up with an 8-page-long monolithic listing, whereas in a critique Doug McIlroy provided a six-line shell script.) The article makes many interesting and valid points. However, one of the points raised is that the specified problem was ideal for solving with Unix tools, and that a different problem, such as "find the top K pairs of words and print the Levenshtein distance between each pair", would be much more difficult to solve with Unix commands. As the developer of an edX massive open online course (MOOC) on the use of Unix Tools for data, software and production engineering, I decided to put this claim to the test.

Here is the commented version of the original pipeline that McIlroy devised.

# Split text into words by replacing non-word characters with newlines
tr -cs A-Za-z '\n' |
# Convert uppercase to lowercase
tr A-Z a-z |
# Sort so that identical words occur adjacently
sort |
# Count occurrences of each line
uniq -c |
# Sort numerically by decreasing number of word occurrences
sort -rn |
# Quit after printing the specified number (K) of words
sed ${1}q
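
To try the pipeline out, one can save it as an executable script and pass the desired number of words as its first argument. A minimal usage sketch, assuming the script is saved as topwords.sh and a text file book.txt exists (both names are my own, for illustration):

# Print the 10 most frequent words in book.txt
sh topwords.sh 10 < book.txt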

And here is the version solving the problem that Hillel Wayne claimed would be difficult to solve with a Unix pipeline. It turns out that this can also be done in a pipeline of just nine (non-commented) lines.

# Split text into words by replacing non-word characters with newlines
tr -cs A-Za-z '\n' |
# Convert uppercase to lowercase
tr A-Z a-z |
# Make pairs out of words by testing and storing the previous word
awk 'prev {print prev, $1} {prev = $1}' |
# Sort so that identical words occur adjacently
sort |
# Count occurrences of each line
uniq -c |
# Sort numerically by decreasing number of word occurrences
sort -nr |
# Print the specified number (K) of pairs
head -n $1 |
# Remove the occurrence count, keeping the two words
awk '{print $2, $3}' |
# Print the Levenshtein distance between the words of each pair (autosplit into @F)
perl -n -a -MText::LevenshteinXS -e 'print distance(@F), "\n"'
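
The combined script is invoked the same way. A sketch, assuming it is saved as toppairs.sh (a name I made up) and that the Text::LevenshteinXS module has been installed, for example from CPAN:

# One-off step: install the Levenshtein distance module
cpan Text::LevenshteinXS
# Print the distance for each of the 5 most frequent word pairs in book.txt
sh toppairs.sh 5 < book.txt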

One may claim that I cheated above by invoking Perl and using the Text::LevenshteinXS module. But the reuse of existing tools, rather than the building of monoliths, is exactly the Unix command-line philosophy. In fact, one of the reasons I sometimes prefer Perl over Python is that it is very easy to incorporate into modular Unix tool pipelines. In contrast, Python encourages the creation of monoliths of the type McIlroy criticized.
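To make the comparison concrete, here is a sketch of how the final Perl stage might be written as an equivalent Python filter; it assumes the third-party python-Levenshtein package (my choice for illustration; nothing in the original pipeline prescribes it):

# Hypothetical Python counterpart of the final perl stage; assumes python-Levenshtein is installed
python3 -c 'import sys
from Levenshtein import distance
for line in sys.stdin:
    a, b = line.split()
    print(distance(a, b))'

Even as a faithful line-by-line filter, the Python version needs an explicit input loop and explicit splitting, which Perl's -n and -a flags provide for free.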

Regarding my choice of awk for obtaining word pairs, note that this can also be done with the command sed -n 'H;x;s/\n/ /;p;s/.* //;x'. However, I find the awk version much more readable.
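
The awk stage's behavior is easy to check on a toy input; the following sanity test (my own example) prints the two successive word pairs:

# Expected output: "a b" followed by "b c"
printf 'a\nb\nc\n' | awk 'prev {print prev, $1} {prev = $1}'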

Through this demonstration I haven't proven that Bentley didn't frame Knuth; it seems that at some point McIlroy admitted that the criticism was unfair. However, I did show that a counter-example chosen specifically to demonstrate the limits of Unix pipeline processing power is in fact quite easy to implement with just three additional commands. So my claim is that the power of the Unix tools is often vastly underestimated.

In my everyday work, I use Unix commands many times a day to perform diverse tasks. I very rarely encounter tasks that cannot be solved by joining together a couple of commands; the automated editing of a course's videos and animations was one such task. Even in those cases, what I typically do is write a small script or program to complement a Unix tools pipeline or make-based workflow.
