Cut Over to New Data Center and Upgraded OS Done. :)
Post 303022893 by Neo on Sunday, 9 September 2018, 01:38 AM
Quote:
Originally Posted by Aia
I would like to repeat that it is all about CI/CD (I do not have to highlight it, since I made myself clear before). Companies (customers) that do not implement CI/CD, for the most part, do not appreciate the evolution to the cloud. Thanks to cloud computing and the implementation of automation, applications are developed and delivered faster. I enjoy engineering systems that require almost no manual intervention from the moment we commit code to source control.
I have enjoyed seeing teams' confidence rise through the nature of CI, knowing that the tests are well crafted and true to what will show in production; that a whole piece of infrastructure is created on demand, automatically, for CI, and that once the fast feedback is reported, it is brought down until the next test, which could be just a few minutes later. This is not a buzzword; these are real results that benefit organizations that want faster deployments without compromising quality assurance, and I am fortunate to do that work. I take pride in knowing I have engineered a system that provides reproducible results, that has been committed to source control, and that can be brought to life in a matter of minutes.
In fact, with the utilization of containers, I can now provide even quicker infrastructure where immutability is possible.
It is not my intention to convince anyone (I am not in that business), but I want to reiterate my original statement.
That's all great, and well written, but it has little to do with UNIX.COM moving our legacy server over to a new data center and upgrading.

If I moved it to the cloud, I would consider that a downgrade, not an upgrade, LOL

We have been on the cloud before... it's not an upgrade for UNIX.COM and our server.

Moving UNIX.COM to "the cloud" would be a downgrade, at least based on my experience.

And in closing, moving to the cloud would not provide UNIX.COM with:

Quote:
Continuous Integration and Continuous Delivery and Continuous Deployment.
This I know as a fact from years of experience.
Cheers.
cpanfile-faq(3pm)					User Contributed Perl Documentation					 cpanfile-faq(3pm)

NAME
    cpanfile-faq - cpanfile FAQ
QUESTIONS

Does cpanfile replace Makefile.PL or Build.PL?

    No, it doesn't. "cpanfile" is a simpler way to declare CPAN dependencies, mainly for applications rather than for CPAN distributions. In fact, most CPAN distributions do not need to switch to "cpanfile" unless they absolutely want to take advantage of some of its features (see below). This is considered a new extension for applications and installers.

Why do we need yet another format?

    Here are some of the reasons that motivated the new cpanfile format.

    Not everything is a CPAN distribution

    First of all, it is annoying to write a Makefile.PL when what you develop is not a CPAN distribution. It gets more painful when you develop a web application that you want to deploy to a different environment (such as cloud infrastructure) using a version control system, because it often requires you to commit the META file or the "inc/" directory (or, even worse, both) to the repository when your build script uses non-core modules such as Module::Install or File::Copy::Recursive.

    Many web application frameworks generate a boilerplate "Makefile.PL" to declare dependencies and to let you install them with "cpanm --installdeps .", but that doesn't always mean the applications are meant to be installed. Things can often be much simpler if you run the application from the checkout directory.

    With cpanfile, dependencies can be installed either globally or locally using supported tools such as cpanm or carton. Because "cpanfile" lists all the dependencies of your entire application and will be updated over time, it makes perfect sense to commit the file to a version control system and push it for a deployment.

    More control for the dependencies analysis

    One of the limitations I hit when trying to implement a self-contained local::lib library path feature for cpanminus was that the configuration phase runs the build file as a separate perl process, i.e. "perl Makefile.PL". This makes it hard for the script to avoid accidentally loading modules installed in the local "site_perl" directory when determining the dynamic dependencies.

    With the recent evolution of the CPAN installer ecosystem, such as local::lib support, things become much easier if the installers, rather than build tools such as Module::Install, figure out whether dependencies are installed.

    Familiar DSL syntax

    This is a new file type, but the format and syntax are not entirely new. The metadata it can declare is exactly a subset of "Prereqs" in the CPAN Meta Spec, with some conditionals such as "platform" and "perl".

    The syntax borrows a lot from Module::Install. Module::Install is a great way to easily declare module metadata such as name, author and dependencies. The cpanfile format simply extracts the dependency declarations into a separate file, which means most developers are already familiar with the syntax.

    Complete CPAN Meta Spec v2 support

    "cpanfile" basically allows you to declare a CPAN::Meta::Spec prerequisite specification using an easy Perl DSL syntax. This makes it easy to declare per-phase dependencies and newer version 2 features such as conflicts and version ranges.

How can I start using "cpanfile"?

    First of all, most distributions on CPAN are not required to update to this format. If your application currently uses "Makefile.PL" etc. for dependency declaration because of the current toolchain implementation (e.g. "cpanm --installdeps ."), you can upgrade to "cpanfile" while keeping the build-file based installation working for backward compatibility. A minimal example is sketched below.
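    The following is a minimal cpanfile sketch illustrating the DSL described above. The module names, versions and the conflicting module are placeholders chosen for illustration, not recommendations.

        # Runtime dependencies
        requires 'Plack', '1.0';                 # placeholder module/version
        requires 'JSON', '>= 2.00, < 3.00';      # version range (CPAN Meta Spec v2 feature)

        # Per-phase dependencies
        on 'test' => sub {
            requires 'Test::More', '0.98';       # needed only when running tests
        };

        on 'develop' => sub {
            requires 'Devel::NYTProf';           # needed only for development
        };

        # Conflicting modules (another v2 feature); hypothetical module name
        conflicts 'Some::Broken::Module', '< 1.21';

    With such a file in the application root, the supported installers mentioned above can resolve the dependencies, e.g. "cpanm --installdeps ." or "carton install".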
    TBD: Support in other tools such as MakeMaker.

perl v5.14.2                          2012-04-04                       cpanfile-faq(3pm)