"Install Linux alongside Windows" option is not available

I want to install Linux Mint 19.3 on my laptop, but the "Install Linux alongside Windows" option is not available when I run the Linux Mint installer. How can I make that option appear in the setup? I’ve also created an NTFS partition on my HDD. submitted by /u/harshdaniel66356

System-wide aliases

I have my aliases set in my ~/.bash_aliases, and they work like a charm in the terminal. However, they don’t work the same way in the Application Finder (Alt + F2) or the Whisker Menu. Does anybody know how to fix this? Thanks. submitted by /u/huysmithz
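Aliases in ~/.bash_aliases are only expanded by interactive Bash sessions, so graphical launchers such as the Application Finder or Whisker Menu, which spawn commands without going through an interactive shell, never see them. One common workaround is to expose each alias as a small executable on your PATH. The Python sketch below is a hypothetical illustration of that idea (the alias format it parses and the ~/.local/bin target directory are assumptions, not a fix taken from the thread):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: turn simple ~/.bash_aliases entries into tiny wrapper
scripts in ~/.local/bin so that non-shell launchers can run them by name.
Assumes one alias per line in the form: alias name='command ...'"""

import re
import stat
from pathlib import Path

ALIASES = Path.home() / ".bash_aliases"
BIN_DIR = Path.home() / ".local" / "bin"   # must be on PATH for the launcher

ALIAS_RE = re.compile(r"""^alias\s+([\w.-]+)=(['"])(.+)\2\s*$""")

def main() -> None:
    BIN_DIR.mkdir(parents=True, exist_ok=True)
    for line in ALIASES.read_text().splitlines():
        match = ALIAS_RE.match(line.strip())
        if not match:
            continue
        name, _, command = match.groups()
        wrapper = BIN_DIR / name
        # The wrapper simply execs the aliased command, passing along any arguments.
        wrapper.write_text(f'#!/bin/sh\nexec {command} "$@"\n')
        wrapper.chmod(wrapper.stat().st_mode | stat.S_IXUSR)
        print(f"wrote {wrapper}")

if __name__ == "__main__":
    main()
```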

Octarine Adds 2 Open Source Projects to Secure Kubernetes

Octarine announced today it has launched two open source projects intended to enhance Kubernetes security. The first project is kube-scan, a workload and assessment tool that scans Kubernetes configurations and settings to identify and rank potential vulnerabilities in applications in minutes. The second project is a Kubernetes Common Configuration Scoring System (KCCSS), a framework for rating security risks involving misconfigurations.
Julian Sobrier, head of product for Octarine, said the projects are extensions of the namesake cybersecurity framework the company created based on a service mesh for Kubernetes clusters. The Octarine service mesh not only segments network and application traffic all the way up through Layer 7 running on Kubernetes clusters, but it also acts as an inspection engine that employs machine learning algorithms to identify anomalous traffic, Sobrier says.
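KCCSS assigns each misconfiguration a severity score, similar in spirit to how CVSS rates vulnerabilities. The actual rules live in the kube-scan and KCCSS projects themselves; the fragment below is only an illustrative sketch, with invented check names and weights, of what scoring a pod spec for risky settings can look like:

```python
# Illustrative only: invented checks and weights, not Octarine's actual KCCSS rules.
RISK_WEIGHTS = {
    "privileged_container": 9,
    "runs_as_root": 6,
    "host_network": 5,
    "no_resource_limits": 3,
}

def score_pod(spec: dict) -> int:
    """Return a crude risk score for a Kubernetes pod spec given as a dict."""
    score = 0
    for container in spec.get("containers", []):
        security = container.get("securityContext", {})
        if security.get("privileged"):
            score += RISK_WEIGHTS["privileged_container"]
        if security.get("runAsUser", 0) == 0:      # treat an unset user as root
            score += RISK_WEIGHTS["runs_as_root"]
        if not container.get("resources", {}).get("limits"):
            score += RISK_WEIGHTS["no_resource_limits"]
    if spec.get("hostNetwork"):
        score += RISK_WEIGHTS["host_network"]
    return score

# A privileged container on the host network scores high; a locked-down one scores low.
print(score_pod({"hostNetwork": True,
                 "containers": [{"securityContext": {"privileged": True}}]}))
```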
[Source: Container Journal]
The post Octarine Adds 2 Open Source Projects to Secure Kubernetes appeared first on Linux.com.

Intel and Softbank Beware. Open Source Is Coming to the Chip Business

After revolutionizing software, the open-source movement is threatening to do the same to the chip industry. Big technology companies have begun dabbling with RISC-V, which replaces proprietary know-how in a key part of the chip design process with a free standard that anyone can use. While it’s early days, this could create a new crop of processors that compete with Intel Corp. products and whittle away at the licensing business of Arm Holdings Plc.
In December, about 2,000 people packed into a Silicon Valley conference to learn about RISC-V, a new set of instructions that control how software communicates with semiconductors.
[Source: Bloomberg]
The post Intel and Softbank Beware. Open Source Is Coming to the Chip Business appeared first on Linux.com.

Red Hat Extends Runtimes Middleware Portfolio

Red Hat has made available the latest instance of Red Hat Runtimes, a suite of lightweight open source components and frameworks that makes it easier to discover the middleware most appropriate for building a specific type of application.
James Falkner, product marketing director for Runtimes at Red Hat, said that as organizations embrace cloud-native application architectures based on microservices, it has become increasingly challenging to determine which middleware to deploy and where to deploy it optimally. Red Hat Runtimes not only makes it easier to navigate all those options, Falkner said, but all the components and frameworks are certified to be pre-integrated.
[Source: DevOps.com]
The post Red Hat Extends Runtimes Middleware Portfolio appeared first on Linux.com.

Wine 5.0 Officially Released with Multi-Monitor and Vulkan 1.1 Support, More

Big news today for Linux gamers and ex-Windows users as the final release of the Wine 5.0 software is now officially available for download with numerous new features and improvements.
After being in development for more than one year, Wine 5.0 is finally here with a lot of enhancements, starting with support for multi-monitor configurations, the reimplementation of the XAudio2 low-level audio API, Vulkan 1.1.126 support, as well as built-in modules in PE (Portable Executable) format. “This release is dedicated to the memory of Józef Kucia, who passed away in August 2019 at the young age of 30. Józef was a major contributor to Wine’s Direct3D implementation, and the lead developer of the vkd3d project. His skills and his kindness are sorely missed by all of us,” reads today’s announcement.
[Source: Softpedia]
The post Wine 5.0 Officially Released with Multi-Monitor and Vulkan 1.1 Support, More appeared first on Linux.com.

Setting up passwordless Linux logins using public/private keys

Setting up an account on a Linux system that allows you to log in or run commands remotely without a password isn’t all that hard, but there are some tedious details that you need to get right if you want it to work. In this post, we’re going to run through the process and then show a script that can help manage the details.
Once set up, passwordless access is especially useful if you want to run ssh commands within a script, especially one that you might want to schedule to run automatically.
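The helper script described in the article is not reproduced here. As a rough sketch of the same idea, the Python fragment below shells out to the standard OpenSSH tools, ssh-keygen and ssh-copy-id, to create a key pair and install the public key on a remote host; the user, host, and key path are placeholders, not values from the article:

```python
#!/usr/bin/env python3
"""Rough sketch: automate passwordless SSH setup with the standard OpenSSH tools.
The remote user/host and the key path below are placeholders."""

import subprocess
from pathlib import Path

KEY = Path.home() / ".ssh" / "id_ed25519"
REMOTE = "user@example.com"   # placeholder target host

def ensure_key() -> None:
    """Generate an ed25519 key pair once, with an empty passphrase."""
    if KEY.exists():
        return
    KEY.parent.mkdir(mode=0o700, exist_ok=True)
    subprocess.run(
        ["ssh-keygen", "-t", "ed25519", "-f", str(KEY), "-N", ""],
        check=True,
    )

def install_key() -> None:
    """Append the public key to the remote authorized_keys file.
    ssh-copy-id prompts for the remote password this one time."""
    subprocess.run(["ssh-copy-id", "-i", f"{KEY}.pub", REMOTE], check=True)

if __name__ == "__main__":
    ensure_key()
    install_key()
    # After this, e.g. 'ssh user@example.com uptime' should not ask for a password.
```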
[Source: NetworkWorld]
The post Setting up passwordless Linux logins using public/private keys appeared first on Linux.com.

9 favorite open source tools for Node.js developers

Node.js is a cross-platform, open source runtime environment for executing JavaScript code outside of the browser. Built on Chrome’s V8 JavaScript engine, it is mainly used for building fast, scalable, and efficient network applications.
For 49% of all developers, Node.js is at the top of the pyramid when it comes to front-end and back-end development. Take a look at this list of 9 of the best open source tools for simplifying Node.js development.
[Source: Opensource.com]
The post 9 favorite open source tools for Node.js developers appeared first on Linux.com.

Nextcloud Hub takes on Google Docs and Office 365

For years, Nextcloud has set the standard for run-your-own Infrastructure as a Service (IaaS) private clouds. Now with the open-source Nextcloud Hub, it’s taking on Software-as-a-Service (SaaS) office programs such as Google Docs and Office 365.
Nextcloud has long offered Collabora Online Office, a SaaS version of the open-source LibreOffice office suite, to its customers. Hub, though, is a new product. It combines Nextcloud’s outstanding cloud file system, Nextcloud Files, with Ascensio System’s ONLYOFFICE. Together they form a complete productivity office suite with word processing, spreadsheets, presentation software, document management, project management, customer relationship management (CRM), calendar, and mail.
[Source: ZDNet]
The post Nextcloud Hub takes on Google Docs and Office 365 appeared first on Linux.com.

MNT Reform, an Open Source Laptop, Expected to Hit Crowd Supply in February

MNT Reform is a laptop that aims to be fully open source, from the firmware and hardware to the software. The device is expected to hit Crowd Supply in February and aims to offer a very modular design, with easily replaceable parts that combine standard components and 3D-printed parts.
Modern Hardware, But All Open Source!
The MNT Reform laptop is expected to come with 4 GB of DDR3 memory, an NVMe slot for an SSD, and a Gigabit Ethernet port. This offers decent expandability, all housed in a fully anodized, CNC-milled aluminum case.
[Source: wccftech.com]
The post MNT Reform, an Open Source Laptop, Expected to Hit Crowd Supply in February appeared first on Linux.com.

New Linux System Call Proposed To Let User-Space Pin Themselves To Specific CPU Cores

A “pin_on_cpu” system call has been proposed for the Linux kernel as a new means of letting user-space threads pin themselves to specific CPU cores. User-space processes can already request to run on specific CPU cores via calls like sched_setaffinity, which gets/sets the CPU affinity mask; pin_on_cpu would be a new and simpler way. The current calls also run into issues around CPU hot-plugging, as explained further in the RFC mailing list post.
Setting the CPU core to run on with the proposed pin_on_cpu system call would still require that the specific CPU be part of the allowed CPU mask.
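For context, the existing affinity interface that pin_on_cpu aims to simplify is already reachable from scripts; Python, for example, wraps sched_getaffinity/sched_setaffinity directly. A minimal, Linux-only example (the CPU number chosen here is arbitrary):

```python
import os

# Show which CPUs the current process may run on, then pin it to CPU 0.
# A pid argument of 0 means "the calling process".
print("allowed CPUs before:", os.sched_getaffinity(0))
os.sched_setaffinity(0, {0})
print("allowed CPUs after: ", os.sched_getaffinity(0))
```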
[Source: Phoronix]
The post New Linux System Call Proposed To Let User-Space Pin Themselves To Specific CPU Cores appeared first on Linux.com.

Adobe products on GNU/Linux

Is there any way to get some sort of agreement between major Linux companies to bring Adobe to Linux? After all these years we still don’t have any hint of their proprietary software running on Linux. Why? If small companies need Adobe software and need to cut expenses, they could run Adobe products on Linux (cutting out Windows licenses or Apple hardware). Adobe would still get money, and companies would benefit from only having to pay Adobe and nothing else; this is a serious win-win situation. So why isn’t it happening? submitted by /u/_yggdrsl_

Popular 32-bit desktop flavors?

I have an old Centrino laptop that I’m giving my son to use for writing, and the CPU only supports 32-bit. I know Ubuntu no longer supports 32-bit (after 16.04, I believe). I’m looking for a current, supported distro for this older laptop (1 GB RAM, 4:3 screen). Thanks. submitted by /u/rpeters83

My Linux dual boot with Windows literally disappeared

After a Windows update my dual boot is gone. I have both OSes on the same drive in separate partitions, and the Linux partition still exists, since Windows detects the drive as only 500 GB (it’s actually 1 TB). After some digging I found a program called EasyBCD, but it doesn’t work for me either. Has anyone had any similar problems? submitted by /u/diale13

Mediterranean Shipping Company on Azure Site Recovery

Today’s Q&A post covers an interview between Siddharth Deekshit, Program Manager, Microsoft Azure Site Recovery engineering, and Quentin Drion, IT Director of Infrastructure and Operations, MSC. MSC is a global shipping and logistics business, and our conversation focused on their organization’s journey with Azure Site Recovery (ASR). To learn more about achieving resilience in Azure, refer to this whitepaper.

I wanted to start by understanding the transformation journey that MSC is going through, including consolidating on Azure. Can you talk about how Azure is helping you run your business today?

We are a shipping line, so we move containers worldwide. Over the years, we have developed our own software to manage our core business. We have a different set of software for small, medium, and large entities, which were running on-premises. That meant we had to maintain a lot of on-premises resources to support all these business applications. A decision was taken a few years ago to consolidate all these business workloads inside Azure regardless of the size of the entity. When we are migrating, we turn off what we have on-premises and then start using software hosted in Azure and provide it as a service for our subsidiaries. This new design is managed in a centralized manner by an internal IT team.

That’s fantastic. Consolidation is a big benefit of using Azure. Apart from that, what other benefits do you see of moving to Azure?

For us, automation is a big one and a huge improvement. The API, integration, and automation capabilities that we have with Azure allow us to deploy environments in a matter of hours, where before it took much, much longer because we had to order the hardware, set it up, and then configure it. Now we no longer need to worry about setup, hardware support, and warranties. The environment is all virtualized and we can, of course, provide the same level of recovery point objective (RPO), recovery time objective (RTO), and security to all the entities that we have worldwide.

Speaking of RTO and RPO, let’s talk a little bit about Site Recovery. Can you tell me what life was like before using Site Recovery?

Actually, when we started migrating workloads, we had a much more traditional approach, in the sense that we were doing primary production workloads in one Azure region, and we were setting up and managing a complete disaster recovery infrastructure in another region. So the traditional on-premises data center approach was really how we started with disaster recovery (DR) on Azure, but then we spent the time to study what Site Recovery could provide us. Based on the findings and some testing that we performed, we decided to change the implementation that we had in place for two to three years and switch to Site Recovery, ultimately to reduce our cost significantly, since we no longer have to keep our DR Azure Virtual Machines running in another region. In terms of management, it’s also easier for us. For traditional workloads, we have better RPO and RTO than we saw with our previous approach. So we’ve seen great benefits across the board.

That’s great to know. What were you most skeptical about when it came to using Site Recovery? You mentioned that your team ran tests, so what convinced you that Site Recovery was the right choice?

It was really based on the tests that we did. Earlier, we were doing a lot of manual work to switch to the DR region, to ensure that domain name system (DNS) settings and other networking settings were appropriate, so there were a lot of constraints. When we tested it compared to this manual way of doing things, Site Recovery worked like magic. The fact that our primary region could fail and that didn’t require us to do a lot was amazing. Our applications could start again in the DR region and we just had to manage the upper layer of the app to ensure that it started correctly. We were cautious about this app restart, not because of the Virtual Machines (we were confident that Site Recovery would work) but because of our database engine. We were positively surprised to see how well Site Recovery works. All our teams were very happy about the solution, and they see the added value of moving to this kind of technology as operational teams; for us in management, it also saves money, because we reduced the number of Virtual Machines that were sitting idle.

Can you talk to me a little bit about your onboarding experience with Site Recovery?

I think we had six or seven major in-house developed applications in Azure at that time. We picked one of these applications as a candidate for testing. The test was successful. We then extended to a different set of applications that were in production. There were again no major issues. The only drawback we had was with some large disks: initially, some of our larger disks were not supported. This was solved quickly, and since then it has been, I would say, really straightforward. Based on the success of our testing, we worked to switch all the applications we have on the platform to use Site Recovery for disaster recovery.

Can you give me a sense of what workloads you are running on your Azure Virtual Machines today? How many people leverage the applications running on those Virtual Machines for their day job?

So it’s really core business apps. There is, of course, the main infrastructure underneath, but what we serve is business applications that we have written internally, presented to Citrix frontend in Azure. These applications do container bookings, customer registrations, etc. I mean, we have different workloads associated with the complete process of shipping. In terms of users, we have some applications that are being used by more than 5,000 people, and more and more it’s becoming their primary day-to-day application.

Wow, that’s a ton of usage and I’m glad you trust Site Recovery for your DR needs. Can you tell me a little bit about the architecture of those workloads?

Most of them are Windows-based workloads. The software that gets used the most worldwide is a 3-tier application: we have a SQL database, a middle-tier server, an application server, and also some web frontend servers. But the new one that we have developed now is based on microservices. There are also some Linux servers being used for specific purposes.

Tell me more about your experience with Linux.

Site Recovery works like a charm with Linux workloads. We only had a few mistakes in the beginning, made on our side. We wanted to use a product from Red Hat called Satellite for updates, but we did not realize that we cannot change the way that the Virtual Machines are being managed if you want to use Satellite. It needs to be defined at the beginning otherwise it’s too late. But besides this, the ‘bring your own license’ story works very well and especially with Site Recovery.

Glad to hear that you found it to be a seamless experience. Was there any other aspect of Site Recovery that impressed you, or that you think other organizations should know about?

For me, it’s the capability to be able to perform drills in an easy way. With the more traditional approach, each time that you want to do a complete disaster recovery test, it’s always time and resource-consuming in terms of preparation. With Site Recovery, we did a test a few weeks back on the complete environment and it was really easy to prepare. It was fast to do the switch to the recovery region, and just as easy to bring back the workload to the primary region. So, I mean for me today, it’s really the ease of using Site Recovery.

If you had to do it all over again, what would you do differently on your Site Recovery Journey?

I would start using it earlier. If we hadn’t gone with the traditional active-passive approach, I think we could have saved time and money for the company. On the other hand, going that way made us confident in the journey. Other than that, I think we wouldn’t have changed much. But what we want to do now is start looking at Azure Site Recovery services to replicate workloads running on on-premises Virtual Machines in Hyper-V. For those applications that are still not migrated to Azure, we want to at least ensure proper disaster recovery. We also want to replicate some VMware Virtual Machines that we still have as part of our migration journey to Hyper-V. This is what we are looking at.

Do you have any advice for folks for other prospective or current customers of Site Recovery?

One piece of advice that I could share is to suggest starting sooner and if required, smaller. Start using Site Recovery even if it’s on one small app. It will help you see the added value, and that will help you convince the operational teams that there is a lot of value and that they can trust the services that Site Recovery is providing instead of trying to do everything on their own.

That’s excellent advice. Those were all my questions, Quentin. Thanks for sharing your experiences.

Learn more about resilience with Azure. 

Examples of real-world exploits which were mitigated/prevented by SELinux / AppArmor?

One thing I always look into when deploying Linux distributions is whether they ship a configured LSM out of the box (mostly SELinux or AppArmor). But I’m wondering how important this really is. I’ve been trying to find examples of real-world (so not purely theoretical) exploits and issues that have cropped up over the years and that ended up not being effective against systems with a proper LSM deployed. It seems like SELinux mitigated a container vulnerability [back in 2017](https://www.redhat.com/en/blog/selinux-mitigates-container-vulnerability), and then another one [in 2019](https://www.redhat.com/en/blog/latest-container-exploit-runc-can-be-blocked-selinux). Searching for these examples is quite challenging because I keep running into vulnerability reports for SELinux and AppArmor themselves, rather than examples of vulnerabilities they were able to mitigate or protect against. Does anyone else have any good examples of this? submitted by /u/PusheenButtons

Does anyone else enjoy using Nano for a terminal based editor? I think it gets a bad rap.

Out of the box I think it’s weak, but after editing the config file I find it powerful enough for what I like to do. I usually write programs (and almost everything else) in gedit, but I’ve found myself recently enjoying nano as well. I really enjoy the simplicity, and I don’t need the macros or extra features from vi or emacs. Not that I don’t also enjoy either one of those. I’m a student, so perhaps I would appreciate different tools for much larger projects than I’m used to, but I feel like an IDE would serve me well enough for those sorts of tasks if need be. I like to keep things simple, and I’ve never had that backfire. Any thoughts? submitted by /u/tempus-temporis
