Showing results for tags 'linux kernel'.

Found 9 results

  1. Greg Kroah-Hartman, the Linux Foundation fellow currently responsible for stable Linux kernel releases, shared the lessons he’s learned as a kernel developer that are applicable to other developers at this year’s Linux App Summit. Kroah-Hartman has been helping develop the Linux kernel for over 20 years — “way too long,” he jokes — but reminded the audience that he has done some userspace work and maintains some small Linux userspace packages. He started by showing how he could succinctly distill the essence of the talk into a single five-word slide: “Don’t make your users mad.” “Let’s go into a little more detail, though,” he adds with a laugh. “I’ll try to explain why you don’t want to make users mad, and what happens when you do.” “So, all this stuff comes down to a pretty simple thing: who are you writing code for?” It’s important because what you should do depends on why you’re doing it, he explains, starting with the easiest scenario. If you’re writing code for yourself, “Throw it over the wall and have fun.” But if you’re writing it for others, “Now you have to care about users.” “Within reason,” he adds, sharing a corner-case example from what he calls the infamous XKCD cartoon illustrating how users can complain about even the most essential of fixes — like “the CPU no longer overheats when you hold down spacebar.” “Everything I say here is ‘within reason’, because there are always some users you can never please, no matter what. And those people you can safely ignore.” Kroah-Hartman explains that’s one of Linus Torvalds’ most deeply held convictions: don’t break userspace. “Other operating systems have this rule as well — it’s a very solid rule — because we always want you to upgrade. And we want you to upgrade without worrying about it. We don’t want you to feel scared. If you see a new release, and we say, ‘Hey, this fixes a bunch of problems,’ we don’t want you to feel worried about taking that.
That’s really, really important — especially with security.” This leads to what Kroah-Hartman called Greg’s rule #1: “Users will not update if they are afraid you will break their current system.” “We’ve learned this the hard way…”

Evolving with Users

This obviously informs the way that changes ultimately get made. “The trick is you just support everything until there are no users,” Kroah-Hartman explained. “Figuring out when there are and are not users is up to you.” Kroah-Hartman told a remarkable story about the time when the Linux kernel developers realized one particular architecture was only being used by exactly two machines in existence. “One of them broke, and the other one — we actually paid the developer to replace it, because we didn’t want to support that thing anymore.” “But otherwise, we have to keep supporting users. Because we made that contract. We made that contract that we gave them a functionality. We cannot remove it.” He points out that Linux can still support 25-year-old binaries, so “People think Linux has stagnated. But in reality, we’ve created new functionalities right next to the old functionalities, that you can do different things — you can take advantage of stuff.” He provided another very prominent example of an application evolving slowly in ways that few notice: KDE.

In the Real World

Another piece of advice for application developers: Don’t take away things that work. “If somebody is using this functionality, keep it there. It’s that simple… Don’t think by removing it, you’re doing them a favor.” When he turned his attention to library developers, his first slide simply read: “I pity you.” “You never know if an API is really useful, until you have too many people using it to ever be able to change it.” —Greg Kroah-Hartman “This is the hardest job ever,” he said with a laugh. “I really, really feel sorry for you… It’s one of the hardest things to do.” “You can never know if your API actually works until you have a lot of people using it.
And by the time you have a lot of people using it, then you realize all the problems in it — and then you can’t change it,” he said, laughing. “Getting it right the first time is almost impossible.” There is one alternative — but it still involves honoring the users who are still using that old library that you want to deprecate. If you really need to make a fresh start — “do that as a whole new library,” he advised. “And then you have no users! And then you don’t know if anybody’s using your API, you don’t know if you got it right, until you have users, and then it’s broken — and the cycle continues.” He also offered an important tip for developers who actually want their users to switch over to that new library. “You need to document it really well — because if you don’t, Stack Overflow is going to try to do it.” He laughs knowingly, remembering a flood of bad drivers and submissions that were apparently inspired by a 10-year-old Stack Overflow post. “You don’t want Stack Overflow to be your primary documentation. Provide good documentation.” But then he added seriously, “If you violate those things, you cause people to lose trust. If I rebuild and see ‘deprecated’ all over the place or APIs changed, it’s just horrible. You don’t want to do that. If you violate that trust, you won’t have that user anymore.” If you do make a change, make sure there truly is a compelling reason. “You have to provide enough reason and enough goodness to force somebody to take the time to learn to do something else. That’s very rare.” His example of this was systemd, which unified a variety of service configurations and initialization processes. “They did it right. They provided all the functionality, they solved a real problem that was there. They unified all these existing tools and problems in such a way that it was just so much better to use, and it provided enough impetus that everybody was willing to do the work to modify their own stuff and move to the new model. “It worked.
People still complain about it, but it worked. Everybody switched… It works well. It solves a real problem. “That was an example of how you can provide a compelling reason to move on — and make the change.” Source
  2. Google Project Zero scolds Samsung and other vendors for adding features that undermine Android security. Samsung's attempt to prevent attacks on Galaxy phones by modifying kernel code ended up exposing it to more security bugs, according to Google Project Zero (GPZ). Smartphone makers like Samsung are not only creating more vulnerabilities by adding downstream custom drivers for direct hardware access to Android's Linux kernel; they would also be better off using security features that already exist in the Linux kernel, according to GPZ researcher Jann Horn. It was this type of mistake that Horn found in the Android kernel on the Samsung Galaxy A50. But as he notes, what Samsung did is pretty common among all smartphone vendors: adding code downstream to the Linux kernel that upstream kernel developers haven't reviewed. Even when these downstream customizations are meant to add security to a device, they also introduce security bugs. Samsung's intended kernel security mitigations introduced a memory corruption bug that Google reported to Samsung in November. It was patched in Samsung's just-released February update for Galaxy phones. The issue affects Samsung's extra security subsystem called PROCA, or Process Authenticator. Samsung describes the bug, SVE-2019-16132, as a moderate issue consisting of use-after-free and double-free vulnerabilities in PROCA that allow "possible arbitrary code execution" on some Galaxy devices running Android 9.0 and 10.0. Incidentally, the February update also includes a patch for a critical flaw in "TEEGRIS devices", referring to the Trusted Execution Environment (TEE) on newer Galaxy phones that contain Samsung's proprietary TEE operating system. The Galaxy S10 is among TEEGRIS devices. But Horn's new blogpost is focused on efforts in Android to reduce the security impact of vendors adding unique code to the kernel.
"Android has been reducing the security impact of such code by locking down which processes have access to device drivers, which are often vendor-specific," explains Horn. An example is that newer Android phones access hardware through dedicated helper processes, collectively known as the Hardware Abstraction Layer (HAL) in Android. But Horn says vendors modifying how core parts of the Linux kernel work undermines efforts to "lock down the attack surface". Instead, he suggests handset makers use direct hardware access features already supported in Linux, rather than customizing Linux kernel code. Horn says some of the custom features that Samsung added are "unnecessary" and wouldn't affect the device if they were removed. He speculated that PROCA is meant to restrict an attacker who has already gained read and write access on the kernel. But he reckons Samsung could be more efficient by directing engineering resources to preventing an attacker from getting this access in the first place. "I believe that device-specific kernel modifications would be better off either being upstreamed or moved into userspace drivers, where they can be implemented in safer programming languages and/or sandboxed, and at the same time won't complicate updates to newer kernel releases," explained Horn. Source
  3. Google outlines plans for mainline Linux kernel support in Android

Google wants less forking, more modularization for Android's Linux kernel. First image of article image gallery. Please visit the source link to see all images. It seems like Google is working hard to update and upstream the Linux kernel that sits at the heart of every Android phone. The company was a big participant in this year's Linux Plumbers Conference, a yearly meeting of the top Linux developers, and Google spent a lot of time talking about getting Android to work with a generic Linux kernel instead of the highly customized version it uses now. It even showed an Android phone running a mainline Linux kernel. But first, some background on Android's current kernel mess. Currently, three major forks happen in between the "mainline" Linux kernel and a shipping Android device (note that "mainline" here has no relation to Google's own "Project Mainline"). First, Google takes an LTS (Long Term Support) Linux kernel and turns it into the "Android Common kernel"—the Linux kernel with all the Android OS-specific patches applied. Android Common is shipped to the SoC vendor (usually Qualcomm), where it gets its first round of hardware-specific additions, first focusing on a particular model of SoC. This "SoC Kernel" then gets sent to a device manufacturer for even more hardware-specific code that supports every other piece of hardware, like the display, camera, speakers, USB ports, and any extra hardware. This is the "Device Kernel," and it's what actually ships on a device. This is an extremely long journey that results in every device shipping millions of lines of out-of-tree kernel code. Every shipping device kernel is different and device-specific—basically no device kernel from one phone will work on another phone.
The mainline kernel version for a device is locked in at the beginning of an SoC's initial development, so it's typical for a brand-new device to ship with a Linux kernel that is two years old. Even Google's latest and, uh, greatest device, the Pixel 4, shipped in October 2019 with Linux kernel 4.14, an LTS release from November 2017. It will be stuck on kernel 4.14 forever, too. Android devices do not get kernel updates, probably thanks to the incredible amount of work needed to produce just a single device kernel, and the chain of companies that would need to cooperate to do it. Because kernel updates never happen, every new release of Android usually has to support the last three years of LTS kernel releases (the minimum for Android 10 is 4.9, a 2016 release). Google's commitment to support older versions of Android with security patches means the company is still supporting kernel 3.18, which is five years old now. Google's band-aid solution so far has been to team up with the Linux community and support mainline Linux LTS releases for longer, and they're now up to six years of support. Last year, at Linux Plumbers Conference 2018, Google announced its initial investigation into bringing the Android kernel closer to mainline Linux. This year it shared a bit more detail on its progress so far, but it's definitely still a work in progress. "Today, we don't know what it takes to be added to the kernel to run on a [specific] Android device," Android Kernel Team lead Sandeep Patil told the group at LPC 2019. "We know what it takes to run Android but not necessarily on any given hardware. So our goal is to basically find all of that out, then upstream it and try to be as close to mainline as possible." Google says that "compared to [Linux] LTS (4.14.0), the Android common kernel has 355 changes, 32,266 insertions, and 1,546 deletions (as of February 2018)."
Progress is being made here every year, but keep in mind there are still two more forks after this. While there is still work to be done for getting the wider Android ecosystem and myriad device configurations on board, for specific models of device, things seem to be progressing well. Tom Gall, the director of the Linaro Consumer Group, showed off a Xiaomi Poco F1 actually running the mainline Linux kernel under the usual Android OS. Gall told the audience, "There are major, major props to be given to the Google Kernel Team in particular for getting their code upstream so that we can boot devices with a mainline kernel." Along with the plan to upstream as much kernel code as possible, Google is also looking to replicate its earlier work at modularizing Android, Project Treble. Just as Treble created a stable interface between the Android OS and the device HALs (Hardware Abstraction Layers), Google's proposal for bringing Android closer to mainline Linux (how is there not a silly "project" name for this yet?) involves stabilizing Linux's in-kernel ABI and having a stable interface for the Linux kernel and hardware vendors to write to. Google wants to decouple the Linux kernel from its hardware support. The Linux community has been against the idea of a stable interface for some time, with the suggestion that if you want the ability to quickly update a kernel, open source your drivers and get them in the main kernel tree, where any changes will be taken care of for you. Open sourcing drivers is an absolute deal breaker for many hardware companies, though, and no amount of advocacy or product degradation is going to change that. The fact that Google is stuck supporting five-year-old versions of Linux shows this model isn't working. Instead of a bunch of forks, Google imagines the new kernel as a series of modules.
Matthias Männich, a senior software engineer at Google, displayed a diagram labeled "Android Next Generation" that showed what this proposed kernel architecture would look like. Google would ship a "Generic Kernel Image (GKI)" along with several "GKI Modules" that would expose this stable API/ABI in the kernel. The hardware-specific drivers (probably closed source drivers) would be loaded as kernel modules. So instead of forks on top of forks, everything gets modularized into additive packages. For now, Google is only proposing that the in-kernel ABI be stable for a single LTS version. So this wouldn't allow devices to upgrade from one version of the Linux kernel to another—it would just allow for a single generalized kernel image to work across multiple devices, instead of the device-specific kernel forks we have today. It would definitely allow for easier security updates, and hopefully it would get new LTS releases to market faster. It would be premature to say this is a change that is definitely coming to Android and the Linux kernel. There are still tons of problems to solve (both technically and politically, I would imagine), and some of these talks were more like public brainstorming sessions between the Google presenters and the Linux engineers (many of whom were also Googlers!) in the audience. This is the second year in a row Google has shown up to LPC and pitched the idea of using the mainline Linux kernel in Android, though, and with Android being one of the biggest Linux sort-of-distros on Earth, it makes sense for everyone to work together, right? "We have miles to go, and we know that, but we've come a long way from where we started," Patil told the crowd. "Last year, I talked it into existence. This time, we actually have problems to discuss." Listing image by Eric Adeleye / Flickr Source: Google outlines plans for mainline Linux kernel support in Android (Ars Technica) (To view the article's image gallery, please visit the above link)
  4. Torvalds seems bullish on getting Paragon's project across the line eventually. Your hard drives and SSDs aren't any better than the filesystem you format them with. Paragon's ntfs3 driver combines decent performance with a fully featured implementation—a combination that neither the Linux in-kernel ntfs driver nor the FUSE-mounted ntfs-3g can claim. dublinmark / Getty Images

In March of last year, proprietary filesystem vendor Paragon Software unleashed a stream of anti-open source FUD about a Samsung-derived exFAT implementation headed into the Linux kernel. Several months later, Paragon seemed to have seen the error of its ways and began the arduous process of getting its own implementation of Microsoft's NTFS (the default filesystem for all Windows machines) into the kernel as well. Although Paragon is still clearly struggling to get its processes and practices aligned to open source-friendly ones, Linux kernel BDFL Linus Torvalds seems to have taken a personal interest in the process. After nearly a year of effort by Paragon, Torvalds continues to gently nudge both it and skeptical Linux devs in order to keep the project moving forward.

Why Paragon?

To those familiar with daily Linux use, the utility of Paragon's version of NTFS might not be immediately obvious. The Linux kernel already has one implementation of NTFS, and most distributions make it incredibly easy to install and use another, FUSE-based implementation (ntfs-3g) beyond that. Both existing implementations have problems, however. The in-kernel implementation of NTFS is extremely old, poorly maintained, and should only be used read-only. As a result, most people who actually need to mount NTFS filesystems on Linux use the ntfs-3g driver instead. Ntfs-3g is in reasonably good shape—it's much newer than the in-kernel ntfs implementation, and as Linux filesystem guru Ted Ts'o points out, it actually passes more automated filesystem tests than Paragon's own ntfs3 does.
Unfortunately, because it operates in userspace rather than in-kernel, ntfs-3g's performance is abysmal. In Ts'o's testing, Paragon's ntfs3 completed automated testing in 8,106 seconds—but the FUSE-based ntfs-3g required a whopping 34,783 seconds. Bugs and performance aside, ongoing maintenance is a key aspect of getting Paragon's ntfs3 in-kernel. Torvalds opined that "Paragon should just make a pull request for [ntfs3]"—but he did so after noting that the code should get OKs from current maintainers and that Paragon itself should maintain the code going forward. (Paragon developer Konstantin Komarov quickly replied that the company intended to continue maintaining the code, once accepted.)

Why not Paragon?

Although Torvalds himself seems positive about getting Paragon's ntfs3 driver mainlined, as do several other users and developers, there are still some concerns about getting Paragon and its workflow properly integrated into the kernel dev community and up to that community's standards. Ted Ts'o—core maintainer of Linux's ext3/ext4 filesystems, and the e2fsprogs userspace utilities used to manage them—seems to be the most critical. In addition to the slightly higher number of failed automated tests he found in Paragon's code, he notes other issues such as whole-system deadlocks that pop up if ntfs3 is stressed too hard. (This is an issue that we have heard about over the years from people who've purchased Paragon's ntfs3, as well.) Ts'o also raises questions about maintenance and communication, saying, "I'd feel better if *someone* at Paragon Software responded to Darrick [Wong] and my queries about their quality assurance, and/or made commitments that they would at least *try* to fix the problems that about 5 minutes of testing using fstests turned up trivially."
Fellow developer Darrick Wong added that he wants to make sure Paragon is invested in maintenance moving forward, so that ntfs3 won't "become one of the shabby Linux filesystem drivers, like [the current in-kernel ntfs]."

The path forward

Despite skepticism from Ts'o and Wong, we broadly expect that inclusion of Paragon's ntfs3 will happen eventually. The company has worked for a year so far to take its code from 27,000 lines tossed over the wall into a Linux-ready patch set—and although primary developer Komarov may not have always replied as quickly or thoroughly as Ts'o and Wong would prefer, he does continue to respond. For his part, Torvalds seems determined to find a performant, modern, maintainable replacement for the ancient (2001-era) and seldom-used ntfs implementation in the kernel now. As long as Paragon remains willing to keep playing, it seems likely to get there eventually—perhaps even in time for the 5.15 kernel.

Source: Paragon is working to get its ntfs3 filesystem into the Linux kernel
  5. New Linux kernel "lockdown" module to prevent high-privileged users -- even root -- from tampering with some kernel functionality. After years of countless reviews, discussions, and code rewrites, Linus Torvalds approved on Saturday a new security feature for the Linux kernel, named "lockdown." The new feature will ship as an LSM (Linux Security Module) in the soon-to-be-released Linux kernel 5.4 branch, where it will be turned off by default; usage is optional due to the risk of breaking existing systems.

Putting a leash on the root account

The new feature's primary function will be to strengthen the divide between userland processes and kernel code by preventing even the root account from interacting with kernel code -- something that it's been able to do, by design, until now. When enabled, the new "lockdown" feature will restrict some kernel functionality, even for the root user, making it harder for compromised root accounts to compromise the rest of the OS. "The lockdown module is intended to allow for kernels to be locked down early in [the] boot [process]," said Matthew Garrett, the Google engineer who proposed the feature a few years back. "When enabled, various pieces of kernel functionality are restricted," said Linus Torvalds, Linux kernel creator, and the one who put the final stamp of approval on the module yesterday. This includes restricting access to kernel features that may allow arbitrary code execution via code supplied by userland processes; blocking processes from writing or reading /dev/mem and /dev/kmem memory; blocking access to /dev/port to prevent raw port access; enforcing kernel module signatures; and many others, detailed here.

Two lockdown modes

The new module will support two lockdown modes, namely "integrity" and "confidentiality." Each is unique and restricts access to different kernel functionality. "If set to integrity, kernel features that allow userland to modify the running kernel are disabled," said Torvalds.
"If set to confidentiality, kernel features that allow userland to extract confidential information from the kernel are also disabled." If necessary, additional lockdown modes can also be added on top, but this will require an external patch on top of the lockdown LSM.

A long time coming

Work on the kernel lockdown feature started in the early 2010s and was spearheaded by now-Google engineer Matthew Garrett. The idea behind the kernel lockdown feature was to create a security mechanism to prevent users with elevated permissions -- even the vaunted "root" account -- from tampering with the kernel's code. Back then, even if Linux systems were employing secure boot mechanisms, there were still ways that malware could abuse drivers, root accounts, and user accounts with special elevated privileges to tamper with the kernel's code, and by doing so, gain boot persistence and a permanent foothold on infected systems. Many security experts have asked over the years that the Linux kernel support a more potent way to restrict the root account and improve kernel security. The main opposition came from Torvalds, who was one of the feature's most ardent critics, especially in its early days. As a result, many Linux distros, such as Red Hat, developed their own Linux kernel patches that added a lockdown feature on top of the mainline kernel. However, the two parties reached a middle ground in 2018, and work progressed on the lockdown feature this year. "The majority of mainstream distributions have been carrying variants of this patchset for many years now, so there's value in providing a [solution that] doesn't meet every distribution requirement, but gets us much closer to not requiring external patches," Torvalds said yesterday. "Applications that rely on low-level access to either hardware or the kernel may cease working as a result - therefore this should not be enabled without appropriate evaluation beforehand."
The news that a kernel lockdown module has finally been approved has been greeted positively in the Linux and cyber-security communities. Source
  6. 5 ways Linux changed our lives and we didn’t even know it

On Aug. 25, 1991, a 21-year-old Finnish student named Linus Torvalds announced to the internet that he was working on a project he said was “just a hobby, won’t be big and professional.” Less than one month later, Torvalds released the Linux kernel to the public. The world hasn’t been the same since. From how we interact with one another on a daily basis to preparing for the future of the human race, Linux is integral to our technological development. To commemorate the nearly 30 years that Linux has been available, we compiled a shortlist of ways Linux has fundamentally changed our lives.

Linux and the internet

It’s hard to imagine life without the internet. It’s how we communicate, how we learn, and how we hear about the outside world. Needless to say, the internet is one of the single most important inventions in human history, and today it is mostly powered by Linux. Linux-based operating systems are the number-one choice for servers around the world. Servers are computers that keep an individual’s or a company’s digital services, products, and websites connected to the internet. Whether it’s Google, Twitter, Facebook, Amazon, or even Apple, most companies choose to use some form of Linux to run their businesses. It’s hard to pin down exactly how many servers run Linux. As of 2015, web analytics and market share company W3Cook estimated that as many as 96.4% of all servers ran Linux or one of its derivatives. No matter the exact number, it’s safe to say that the kernel powers nearly the entire web.

The computer in your pocket, on your wrist, and in your TV

Not only is the internet essentially powered by Linux, but Linux also changed how we most often access the internet: our Android smartphones. Founded in Oct. 2003, Android was forked from Linux by a team of developers to run on digital cameras. Nearly 16 years later, it’s the single most popular operating system in the world, running on more than 2 billion devices.
Even Chrome OS, Android TV, and Wear OS are all forked from Linux. Google isn’t the only one to do this, either. Samsung’s own in-house operating system, Tizen, is forked from Linux as well, and it’s even backed by The Linux Foundation. It is incredibly likely that at least one of the devices you use every day runs some form of Linux. So, the next time you think how amazing your new Samsung Galaxy Note 10 or OnePlus 7 Pro is, thank Linux it exists.

Linux and the automotive industry

Smartphones and similar devices have been commonplace for some time now. Unlike them, electric and smart cars are newer to the consumer electronics industry. Just like other devices, most of these offerings run Linux, and I’m not just talking about Android Auto. Tesla uses Linux as the software in every one of its products. In 2018, ZDNet reported that Tesla even began releasing that code to the public, as is custom for all open-source software. Tesla isn’t alone in this, either. Many major car companies like Honda, Mazda, Volkswagen, and Mercedes-Benz rely on Automotive Grade Linux to deliver their connected car functionality to customers. Wide adoption of standardized connected functionality would be nearly impossible without the kernel, making it the future of the industry.

Supercomputers and other research tools

Supercomputers date all the way back to 1960. But no matter how long they have been around, supercomputers are more powerful now than ever before, and nearly all of them use Linux. Supercomputers are the most powerful technological devices on the planet. Scientists use them to study quantum mechanics, weather forecasting, and molecular research. And, as of 2017, every one of the top 500 supercomputers in the world runs on Linux. Alongside these supercomputers, organizations like CERN, a European nuclear research organization, run Linux for their research. In 2009, CERN launched the world’s largest and most powerful particle accelerator.
This Linux-powered machine is called the Large Hadron Collider, and CERN uses it to study matter, energy, and how the universe began.

NASA and SpaceX

Linux has even changed how we study the universe at large. For the same reasons cars and supercomputers use Linux, NASA uses it for most of the computers aboard the International Space Station. Astronauts use these computers to carry out research and perform tasks related to their assignments. But NASA isn’t the only galaxy-studying organization using Linux. The privately owned SpaceX also uses Linux for many of its projects. In 2017, SpaceX sent a Linux-powered supercomputer developed by HP to space and, according to an AMA on Reddit, even the Dragon and Falcon 9 run Linux. In the end, the kernel has changed us more than we will ever know. It is crucial to our communication, scientific research, and technological development. And it does all of that while simultaneously allowing the internet to provide us an infinite amount of dank memes. Without it, there would be no science or social human development, and we would all still be cave-people. So thank you, Linux, for bringing us together both physically and digitally over the last 28 years. We look forward to the future and how you will change our lives even more. Source
  7. Currently at release candidate 3, the Linux 5.2 kernel is coming soon and promises a host of impressive new features and improvements. Let’s take a look at some of the highlights.

New Features

Fieldbus Subsystem

One of the new features that should excite anyone who deals with automated industrial systems is the introduction of the new Fieldbus Subsystem. Fieldbus is crucial in connecting different systems in industrial environments, and with this addition, the Linux kernel can now natively communicate with field instruments (such as those used in manufacturing plants) as part of a control system.

U2F Zero Driver

Two-factor authentication will be getting a bump in the 5.2 kernel, thanks to the new U2F Zero driver. This new driver adds support for USB-based U2F tokens, which work with online two-factor authentication. The U2F driver will also provide hardware-based random number generator (RNG) capability.

NVIDIA “AltMode”

A new open-source NVIDIA driver is coming that handles VirtualLink devices, which make use of the USB-C connector on RTX Turing graphics cards. As for which VirtualLink device might be supported first, there is no certainty, but (according to Phoronix) Valve’s VR head-mounted display may be among the first.

Generic Counter Interface

Although not as sexy as some of the other new features coming to the kernel, there’s a new Generic Counter Interface that will allow counter devices and drivers to reuse common code (rather than duplicating code in each driver). What is a counter device? According to the diff for generic-counter.rst: Counter devices are prevalent within a diverse spectrum of industries. The ubiquitous presence of these devices necessitates a common interface and standard of interaction and exposure. 
This driver API attempts to resolve the issue of duplicate code found among existing counter device drivers by introducing a generic counter interface for consumption. The Generic Counter interface enables drivers to support and expose a common set of components and functionality present in counter devices.

Null TTY Driver

The NULL TTY driver is necessary when a console driver is needed but not available. When this occurs, the NULL TTY driver provides a dummy console so that all writes are simply discarded. One practical application for such a driver would be embedded controllers.

Daktronics Drivers

Daktronics is a global manufacturer of massive displays (such as those used for scoreboards). These particular drivers have been out of the Linux tree since the 2.x days, but they are finally making their way back. Once these drivers return, they will be cleaned up for production usage. Anyone who works with Daktronics displays should be quite excited about this niche-y news.

Intel Comet Lake

Intel Comet Lake is the successor to Coffee Lake and is rumored to top out at 10 physical cores. These new processors should see a mid-year launch and (according to rumors) might require a new motherboard/socket. Comet Lake should be well supported in kernel 5.2.

Improvements

There are a number of improvements coming to 5.2. Let’s take a look at some of the highlights.

Logitech Wireless Devices

If you own any sort of Logitech wireless device (such as a keyboard or mouse), you will see significant improvements, such as better battery monitoring and per-device key mapping. Previous iterations of the kernel supported Logitech wireless devices via generic HID emulation, but with 5.2, per-device key mapping will make it possible to better support individual wireless devices.

EXT4 Case-Insensitive Filenames/Folders

Linux has, since inception, been case sensitive. 
However, with 5.2, the EXT4 filesystem will support case-insensitive filenames and folders on a per-directory basis. These patches have been in development for some time, and it seems they are finally ready for mainline support.

AMD Ryzen Laptop Improvements

If your laptop uses an AMD Ryzen chip, you should see significant improvements with touchscreens and touchpads.

AMDGPU

Alex Deucher (of AMD) has been working on replacement code to clean up/improve:

- Power profiles for PowerPlay
- Dx display bandwidth
- RAS support with Vega 20
- Plane handling with the DC code
- FreeSync

X86 Memory Management

Thanks to VMware’s Nadav Amit, the 5.2 kernel should see a 3% performance increase from a rework of a portion of the kernel’s x86 Translation Lookaside Buffer (TLB)/memory management code.

Other Improvements/Features

Kernel 5.2 should see a number of other improvements for the likes of:

- Intel Icelake
- ARM Mali graphics hardware
- Fixes for the Nouveau open-source graphics driver
- Improved Thunderbolt support (for older Apple hardware)
- Intel i40e Dynamic Device Personalization
- ASpeed AST2500 SoC driver
- Zap shader support for Qualcomm Adreno 600
- GCC 9 live-patching

Be on the Lookout

When the 5.2 kernel releases, there should be serious improvements for nearly every type of user. The release date has not been set, but kernel 5.2 should find its way into the wild sometime in mid-2019. Be on the lookout for it to hit your systems in the coming months. Source
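As a footnote to the EXT4 item above: the per-directory case-folding behavior can be illustrated with a small toy model. This is a minimal Python sketch of the lookup semantics only; the `Directory` class and its methods are invented for illustration, and the real feature lives inside the kernel’s EXT4 driver, enabled per directory from userspace.

```python
# Toy model of per-directory case-insensitive filename lookup,
# mimicking how EXT4's opt-in casefold behavior looks from userspace.
# Illustration only -- not the kernel implementation.

class Directory:
    def __init__(self, casefold=False):
        self.casefold = casefold      # per-directory flag, off by default
        self._entries = {}

    def _key(self, name):
        # Case-insensitive directories compare names in folded form.
        return name.casefold() if self.casefold else name

    def create(self, name):
        # Store the original spelling, keyed by the (possibly folded) name.
        self._entries[self._key(name)] = name

    def lookup(self, name):
        return self._entries.get(self._key(name))

d = Directory(casefold=True)
d.create("README.txt")
print(d.lookup("readme.TXT"))       # → README.txt (found despite case)

strict = Directory()                # default: case-sensitive, as always
strict.create("README.txt")
print(strict.lookup("readme.TXT"))  # → None
```

Note that, as in the sketch, the kernel feature preserves the name’s original spelling; only lookups become case-insensitive, and only in directories that opt in.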
  8. Linux, the most important software program in the world, has a major weak point: poor testing. That's changing, and one big sign of it is that KernelCI, an automated testing framework for the Linux kernel, is now a Linux Foundation project. At the recent Linux Plumbers Conference in Lisbon, Portugal, one of the hottest topics was how to bring better and automated testing to the Linux kernel. There, the top Linux developers united their efforts behind one testing framework: KernelCI. Now, at Open Source Summit Europe in Lyon, France, it became a Linux Foundation project, to help give KernelCI the resources it needs to be successful. Here's how it works: As you probably know, the Linux kernel is developed by a large, collaborative open-source community, which works through the Linux Kernel Mailing List (LKML). You can't argue with the method. But Linux kernel testing is fragmented, since it is largely done in private silos without enough collaboration on the testing software or methodologies. Part of the problem is how patches are handled on Linux's mailing lists. Russell Currey, a Linux kernel developer, recently explained: "[Unlike a project based solely on GitHub or GitLab] where a pull request contains all of the information needed to merge a group of changes; an email containing, say, patch 7/10 lacks that context. It is nearly impossible to tell from an email message whether a patch series has been merged, rejected, or superseded. In general, mailing lists simply do not carry the same level of metadata as contemporary project-hosting sites, and that makes the CI [Continuous Integration] problem harder." The specific problem? KernelCI was designed from the beginning to address testing Linux on a wide variety of hardware. Until now, when Linux patches were tested, it was usually on developers' own machines. That meant you could only be sure Linux would run as expected on mainstream hardware. 
But if your hardware wasn't that popular… Well, chances are it wasn't tested. As Greg Kroah-Hartman, the maintainer of the Linux stable branch, explained: "Linux runs everywhere and on so many different pieces of hardware, but the testing on that hardware was very minimal. Most people were just testing on the few things that they cared about. So we want to test it on as much hardware as we could to make sure that we're actually supporting all the hardware that we claim we're supporting." Going forward, though, KernelCI will do far more than just hardware testing. Kevin Hilman, KernelCI co-founder and a senior engineer at BayLibre, explained in his Open Source Summit Europe keynote: "We got together at Linux Plumbers. One of the big problems that we have now is we have six or seven different code-testing projects that were sending kernel developers and maintainers reports. This was getting really annoying, so we came together and said, 'Pick one and use it as a framework.' We've agreed on KernelCI, so we're all gonna work together, not duplicate our efforts and results." By consolidating these testing projects and seeking common ground, the new KernelCI will also help address the problem of dealing with patches within the LKML. So, while there will still be many Linux testing suites, no longer will they stand alone without any real coordination between them. KernelCI's goal going forward will be not just to test a wider variety of devices but to unify all upstream Linux kernel testing efforts. Together, this will provide a single place to store, view, compare, and track testing results. "Given how crucial Linux has become to society, achieving comprehensive test coverage of the Linux kernel is essential. 
By applying the open-source philosophy to testing, the KernelCI distributed architecture enables the whole kernel community to collaborate around a single upstream CI system," said Guy Lunardi, VP of business development at open-source consultancy and KernelCI contributor Collabora. Put it all together, and KernelCI will improve Linux Long Term Support (LTS) kernel testing and validation; consolidate existing testing initiatives; improve Linux's overall security; and increase the pool of tested hardware. This, in turn, will improve the quality, stability, and long-term maintenance of the Linux kernel. And that is a very good thing. Source
  9. Happening in plain sight with Proton, WSL and Edge-for-Linux, says open source advocate Open-source software advocate Eric S Raymond has penned an argument that the triumph of Linux on the desktop is imminent because Microsoft will soon tire of Windows. Raymond's argument, posted to his blog late last week, kicked off with some frank admiration for Windows Subsystem For Linux, the tech that lets Linux binaries run under Windows. He noted that Microsoft is making kernel contributions just to improve WSL. Raymond is also an admirer of software called "Proton", an emulation layer that allows Windows games distributed by Steam to run under Linux. Raymond rated Proton as "not perfect yet, but it's getting close". His next item of note was Microsoft's imminent release of its Edge browser for Linux. That collection of ingredients, he argued, will collide with the fact that Azure is now Microsoft's cash cow while the declining PC market means that over time Microsoft will be less inclined to invest in Windows 10. "Looked at from the point of view of cold-blooded profit maximization, this means continuing Windows development is a thing Microsoft would prefer not to be doing," he wrote. "Instead, they'd do better putting more capital investment into Azure – which is widely rumored to be running more Linux instances than Windows these days." Raymond next imagined he was a Microsoft strategist seeking maximum future profits and came to the following conclusion: Microsoft Windows becomes a Proton-like emulation layer over a Linux kernel, with the layer getting thinner over time as more of the support lands in the mainline kernel sources. The economic motive is that Microsoft sheds an ever-larger fraction of its development costs as less and less has to be done in-house. If you think this is fantasy, think again. The best evidence that it's already the plan is that Microsoft has already ported Edge to run under Linux. 
There is only one way that makes any sense, and that is as a trial run for freeing the rest of the Windows utility suite from depending on any emulation layer. Over time, Raymond reckoned, Windows emulation would only be present to handle "games and other legacy third-party software". And eventually Microsoft will get so focused on Azure, and so uninterested in spending money on Windows, that it will ditch even the Windows emulation layer. "Third-party software providers stop shipping Windows binaries in favor of ELF binaries with a pure Linux API … and Linux finally wins the desktop wars, not by displacing Windows but by co-opting it." The end. Source