Software development is perpetually in a state of flux. Coders are constantly fighting a battle to keep their skills relevant. Each year brings new methodologies, frameworks, and languages to learn. Within the context of a highly complex and rapidly changing industry, it’s important to find out which skills, tools, and trends are worthy of your time.
Each year, TNW asks the leaders in the software development world for their take. Let’s face it, if anyone’s going to be a fortune-teller for the industry, it’s those working on the front lines. Here’s what they had to say.
Automated code analysis will get better (and more ubiquitous)
Code analysis tools are nothing new, but they were previously the preserve of well-heeled dev teams with cash to splurge. Now, there are free, open-source alternatives that give the proprietary offerings a run for their money. And as the freebie tools rise in prominence, their adoption will continue to snowball, reckons Facebook research scientist Peter O’Hearn.
“There has been a tremendous amount of work on automating various testing and verification workflows, both in industry and academia. At Facebook, we have been investing in advanced static and dynamic analysis tools that employ symbolic and evolutionary reasoning techniques similar to those from program verification and computational search,” he told TNW.
The tools we develop in London [Infer and Sapienz] target issues related to crashes and stability, performing complex reasoning spanning tens of millions of lines of code. And since Infer is open source, it can be easily integrated into development workflows in a way that brings value while minimizing friction for developers deploying code at scale.
Separately, we’ve seen moves from GitHub and GitLab to simplify the process of integrating source analysis into the entire lifecycle of code. With GitHub Actions, for example, it’s possible to check code for bugs and security flaws upon making a commit. You could reasonably argue these platforms will play an essential role in promoting the adoption of static and dynamic code analysis in the year to come.
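To make that concrete, here is a deliberately simplified sketch of the kind of check a CI step could run against every commit. It is not how Infer or GitHub’s built-in scanning work under the hood; the script and its patterns are purely illustrative, flagging a couple of obviously risky constructs and failing the build if any turn up.

```python
# check_commit.py - a hypothetical, deliberately tiny example of the kind of
# automated check a CI job (GitHub Actions, GitLab CI, etc.) might run on each
# commit. Real analyzers such as Infer do far deeper reasoning; this sketch
# only pattern-matches a couple of risky constructs.
import re
import sys
from pathlib import Path

# Patterns chosen purely for illustration.
RISKY_PATTERNS = {
    r"password\s*=\s*['\"][^'\"]+['\"]": "possible hard-coded credential",
    r"\beval\(": "use of eval() on potentially untrusted input",
}

def scan(paths):
    findings = []
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern, message in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    findings.append(f"{path}:{lineno}: {message}")
    return findings

if __name__ == "__main__":
    problems = scan(sys.argv[1:])       # file list supplied by the CI job
    print("\n".join(problems) or "No issues found.")
    sys.exit(1 if problems else 0)      # a non-zero exit fails the build
```

Wiring something like this into a pipeline is a one-line job step; production-grade analyzers simply replace the regular expressions with far deeper static and dynamic reasoning.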
Julia Silge, Data Scientist at Stack Overflow, echoed similar sentiments. She told TNW that those working in the field of workflow automation will become a valuable commodity in the upcoming year, as more firms jump on the DevOps bandwagon.
“At Stack Overflow, we see evidence that automation for software will be immensely important moving into 2020 and beyond. For example, GitHub Actions (GitHub’s API for building automated software workflows) is one of the fastest-growing new tags on Stack Overflow in the past year,” she told TNW.
We also see that software roles focused on the automation of building, deploying, testing, and monitoring code, such as DevOps practitioners and site reliability engineers, are among the highest-paid and most in-demand on our annual Developer Survey. These kinds of roles are eclipsing even other high-demand roles such as machine learning and data engineers in terms of compensation and how difficult they are to hire. We even see how important automation is in the products we ourselves build to make developers more productive; Stack Overflow for Teams integrates with other common productivity tools so that people who code can integrate knowledge sharing automatically into their existing routines.
Tool diversity breeds technical debt
Technical debt. Technical debt never changes.
Traditionally the finger of blame was pointed at legacy systems developed before the advent of modern software development practices (like microservices, source management, and agile). And while legacy systems often prove problematic, Puppet CTO Deepak Giridharagopal believes there’s a reckoning coming, and it’s all thanks to the heterogeneous nature of the contemporary software development world.
“While companies are constantly in a state of flux – adopting new technologies and patterns to better meet their needs – 2019 saw a lot of change in the world of infrastructure. The cloud and container ecosystems continue to expand and there was also heightened interest in more operationally focused areas like monitoring, tracing, observability, vulnerability management, and policy enforcement,” he told TNW.
But for all these underlying platform improvements, one truth remains inescapable: new applications are built more quickly than old ones are decommissioned. As new platforms get simpler, it’s quicker to build new applications on top. And as new platforms get more robust and reliable, those applications can have a longer lifetime. Doing the math, that means that over time, for those who have multiple applications and teams in play, the world will become increasingly heterogeneous.
In 2020, as the months tick by, enterprises should expect to have an increasing variety of ‘vintages’ of their applications. Legacy apps from a decade ago or more. Apps from the last few years that were au courant in terms of their architecture and tech choices at the time. And new apps using what’s currently in fashion. It’s the infrastructure engineers, though, who have to rationalize and operate across all these different environments, technologies, and architectures.
If, perhaps, 2019 was the year of ‘I can solve this infrastructure problem by adding this new tech,’ I fear 2020 may be the year of ‘now I have two problems.’
2020 will be the year of machine learning, data, and AI
Brian Dawson, DevOps Evangelist at CloudBees, believes it’s time for developer tools to get smarter, which will, in turn, improve coder efficiency.
“Developers will begin to see smart IDEs, compilers, CI/CD pipeline tools, and the like, which will capture data as they work and learn behaviors, acting as a virtual pair programmer: helping identify errors and anti-patterns in code and practices (commit frequency, etc.), as well as identifying and encouraging successful behaviors and practices,” he told TNW.
Dawson rattled off a list of areas where he thinks AI can help developers in their working lives. As you might expect, it’s a fairly long list, mostly centered on providing feedback and preventing the kind of mistakes that bog down development schedules.
“Machine learning algorithms will be able to remind a developer when they have withheld a merge/pull request too long, correlating the age of a pull request to integration or test successes or failures, as well as dynamically identifying which unit and functional tests should be run based on what code was changed by a developer and what functionality has been introduced. They will unlock streams of feedback to a developer based on usage of a flagged feature deployed to a segment of users in production, informing a developer on how user experience relates to code, and where additional focus may be needed,” he said.
Last but not least, they will help continuously identify the likelihood of passing acceptance tests, successful deployment, on-time delivery, and so on, and make suggestions on how to remove blockers and increase the chances of success.
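To illustrate the test-selection idea Dawson describes in the plainest possible terms, here is a hypothetical sketch that maps changed files to tests via a naming convention plus a hand-maintained table. A real system would learn that mapping from historical build and test data rather than hard-coding it; the file names below are invented purely for illustration.

```python
# A hypothetical sketch of "run only the relevant tests". It assumes a naming
# convention (src/foo.py is covered by tests/test_foo.py) plus an optional
# hand-maintained mapping for everything else; an ML-driven system would learn
# this mapping from historical build and test data instead.
from pathlib import Path

# Fallback mapping for code whose tests do not follow the naming convention.
EXTRA_COVERAGE = {
    "src/payments/gateway.py": ["tests/integration/test_checkout.py"],
}

def tests_for(changed_files):
    selected = set()
    for changed in changed_files:
        path = Path(changed)
        if path.parts and path.parts[0] == "src" and path.suffix == ".py":
            selected.add(str(Path("tests") / f"test_{path.stem}.py"))
        selected.update(EXTRA_COVERAGE.get(changed, []))
    return sorted(selected)

if __name__ == "__main__":
    # e.g. the output of `git diff --name-only main`
    changed = ["src/payments/gateway.py", "src/utils.py"]
    print(tests_for(changed))
    # -> ['tests/integration/test_checkout.py',
    #     'tests/test_gateway.py', 'tests/test_utils.py']
```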
Serverless grows up
The enthusiasm surrounding serverless computing has been phenomenal to witness. Predictably, serverless has featured heavily in previous editions of this annual post, mostly centered on the almost religious war being fought between Docker Swarm and Kubernetes.
Chris Yates, VP of Marketing at Platform, believes that serverless computing still has some way to go, particularly beyond the core tasks of deploying, scaling, and monitoring applications.
“2020 will be the year of serverless, but not in the way you think,” he told TNW.
Developers have been spending an enormous amount of time on everything *except* making software that solves problems. ‘DevOps’ has transmogrified from ‘developers releasing software’ into ‘developers building ever more complex infrastructure atop Kubernetes’ and ‘developers reinventing their software as distributed stateless functions.’ In 2020, ‘serverless’ will mature. Handle state. Handle data storage without requiring devs to learn yet-another-proprietary-database-service. Learning new stuff is fun, but shipping is even better, and we’ll finally see systems and services that support that.
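For context, this is roughly what ‘handling state’ looks like in serverless land today: the function itself stays stateless and every byte of state is pushed out to a managed, provider-specific store. The sketch below assumes AWS Lambda and a DynamoDB table named page_hits; both are hypothetical stand-ins, and the point is precisely that the developer has had to learn an extra proprietary service to do something as small as count hits.

```python
# A minimal sketch of the status quo Yates is pushing back on: to keep any
# state, a "serverless" function typically leans on a managed, provider-
# specific store. Assumes a DynamoDB table "page_hits" with string key "page";
# both names are hypothetical and exist only for illustration.
import boto3

table = boto3.resource("dynamodb").Table("page_hits")

def handler(event, context):
    page = event.get("page", "/")
    # Atomically increment a counter. The state lives in DynamoDB, not in the
    # function -- which is exactly the extra service devs have to learn.
    result = table.update_item(
        Key={"page": page},
        UpdateExpression="ADD hits :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {"page": page, "hits": int(result["Attributes"]["hits"])}
```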
Expanding on that point is Markus Eisele, Developer Adoption Lead EMEA at IBM-owned Linux vendor Red Hat:
“The upcoming year will be the first to make complex infrastructure accessible and scalable for software development teams. CodeReady Workspaces and local container runtimes deliver excellent developer experiences with a specialized command-line interface (CLI), such as the open-source odo project,” he said.
Deep integrations into existing development environments bridge gaps to Kubernetes-native continuous delivery (CD) mechanisms (e.g. Tekton Pipelines). Optimized frameworks like the open-source Quarkus project, which hit its 1.0 release in November 2019, will speed up local development while easing the way into production.
Over to you
You’ve made it this far, so tell me: Do you agree with what’s been said? Or do you disagree entirely and have your own bold predictions? Let me know in the comments below, or by reaching out on Twitter.
And if you’d like to compare against our previous predictions, click here for our 2019 article, and here for our 2018 musings.
Published January 15, 2020 — 13:07 UTC