Was Knuth Really Framed by Jon Bentley?


Recently, the formal methods specialist Hillel Wayne posted an interesting article discussing whether Donald Knuth was actually framed when Jon Bentley asked him to demonstrate literate programming. (Knuth came up with an eight-page monolithic listing, whereas in a critique Doug McIlroy provided a six-line shell script.) The article makes many interesting and valid points. Among them, however, is the claim that the specified problem was ideal for solving with Unix tools, and that a different problem, such as "find the top K pairs of words and print the Levenshtein distance between each pair", would be much more difficult to solve with Unix commands. As the developer of an edX massive open online course (MOOC) on the use of Unix tools for data, software, and production engineering, I decided to put this claim to the test.

Here is the commented version of the original pipeline that McIlroy devised.

# Split text into words by replacing non-word characters with newlines
tr -cs A-Za-z '\n' |
# Convert uppercase to lowercase
tr A-Z a-z |
# Sort so that identical words occur adjacently
sort |
# Count occurrences of each line
uniq -c |
# Sort numerically by decreasing number of word occurrences
sort -rn |
# Quit after printing the K specified number of words
sed ${1}q
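To get a concrete feel for what this pipeline produces, here is the same sequence of commands run on a one-line sample input, with `sed 2q` standing in for `sed ${1}q` (i.e. K = 2):

```shell
printf 'the quick fox and the lazy dog and the cat\n' |
tr -cs A-Za-z '\n' |   # one word per line
tr A-Z a-z |           # normalize case
sort |                 # group identical words
uniq -c |              # count each word
sort -rn |             # most frequent first
sed 2q                 # keep the top 2
```

This prints the two most frequent words, `the` (3 occurrences) and `and` (2), each preceded by its count in the leading-space format `uniq -c` uses.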

And here is the version solving the problem that Hillel Wayne claimed would be difficult to solve with a Unix pipeline. It turns out that this can also be done in a pipeline of just nine (non-commented) lines.

# Split text into words by replacing non-word characters with newlines
tr -cs A-Za-z '\n' |
# Convert uppercase to lowercase
tr A-Z a-z |
# Make pairs out of words by testing and storing the previous word
awk 'prev {print prev, $1} {prev = $1}' |
# Sort so that identical words occur adjacently
sort |
# Count occurrences of each line
uniq -c |
# Sort numerically by decreasing number of word occurrences
sort -nr |
# Print the K specified number of pairs
head -n $1 |
# Remove the occurrence count, keeping the two words
awk '{print $2, $3}' |
# Print the Levenshtein distance between word pair (autosplit into @F)
perl -n -a -MText::LevenshteinXS -e 'print distance(@F), "\n"'
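The word-pairing step is the only non-obvious part, so here it is in isolation on a familiar phrase (the Perl step is left out, as it requires installing Text::LevenshteinXS from CPAN):

```shell
printf 'to be or not to be\n' |
tr -cs A-Za-z '\n' |
# Print each word together with the one before it
awk 'prev {print prev, $1} {prev = $1}'
```

Because the pattern `prev` is false for the very first word (when the variable is still empty), no half pair is printed; the output is the five overlapping pairs `to be`, `be or`, `or not`, `not to`, `to be`.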

One may claim that I cheated above by invoking Perl and using the Text::LevenshteinXS module. But the reuse of existing tools, rather than the building of monoliths, is exactly the Unix command-line philosophy. In fact, one of the reasons I sometimes prefer using Perl over Python is that it's very easy to incorporate into modular Unix tool pipelines. In contrast, Python encourages the creation of monoliths of the type McIlroy criticized.
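For readers who still consider the CPAN module cheating, the final step can stay within awk: the sketch below implements the textbook dynamic-programming edit distance. This is only an illustration, not part of the pipeline above, and the Perl one-liner remains the simpler choice.

```shell
# Read "word1 word2" pairs on standard input; print their edit distance
awk '{
  n = length($1); m = length($2)
  for (i = 0; i <= n; i++) d[i, 0] = i   # cost of deleting i characters
  for (j = 0; j <= m; j++) d[0, j] = j   # cost of inserting j characters
  for (i = 1; i <= n; i++)
    for (j = 1; j <= m; j++) {
      c = substr($1, i, 1) == substr($2, j, 1) ? 0 : 1
      best = d[i - 1, j] + 1                                      # deletion
      if (d[i, j - 1] + 1 < best) best = d[i, j - 1] + 1          # insertion
      if (d[i - 1, j - 1] + c < best) best = d[i - 1, j - 1] + c  # substitution
      d[i, j] = best
    }
  print d[n, m]
  delete d   # reset the matrix for the next pair
}'
```

Piping `echo kitten sitting` into it prints 3, the classic example's distance.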

Regarding my choice of awk for obtaining word pairs, note that this can also be done with the command sed -n 'H;x;s/\n/ /;p;s/.* //;x'. However, I find the awk version much more readable.
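For comparison, here is the sed variant on a toy input of one word per line (as produced by the tr stage):

```shell
printf 'a\nb\nc\nd\n' |
# Pair each line with the previous one via the hold space
sed -n 'H;x;s/\n/ /;p;s/.* //;x'
```

Tracing it by hand, its first output line is just the first word preceded by a space (there is no previous word yet), an artifact the awk version's `prev` test avoids; the pairs `a b`, `b c`, `c d` follow.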

Through this demonstration I haven't proven that Bentley didn't frame Knuth; it seems that at some point McIlroy admitted that his criticism was unfair. However, I did show that a counter-example chosen specifically to demonstrate the limits of Unix pipeline processing is in fact quite easy to implement with just three additional commands. So my claim is that the power of the Unix tools is often vastly underestimated.

In my everyday work, I use Unix commands many times a day to perform very diverse tasks. I very rarely encounter tasks that cannot be solved by joining together a couple of commands; the automated editing of a course's videos and animations was one such task. Even in those cases, what I typically do is write a small script or program to complement a Unix tool pipeline or make-based workflow.
