Docker containers (https://www.docker.com/resources/what-container) have become an industry standard for cloud-native applications due to their smaller size and lower resource needs compared to VM-based solutions. Since Docker 17.05, it has been possible to use multistage builds (https://docs.docker.com/develop/develop-images/multistage-build/) to create small scratch-based containers.
Obvious advantages of this approach are the small size of the resulting container and a reduced attack surface thanks to the very limited number of components. But does using scratch also have some unexpected “side effects”? Let’s look into this using a small Go-based program.
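As a minimal sketch of such a build (assuming a Go module with a main package at the repository root; image tag and paths are illustrative), a multistage Dockerfile might look like this:

```dockerfile
# Build stage: full Go toolchain, produces the binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 yields a statically linked binary that can run on scratch
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: completely empty base image, only the binary is shipped
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Note that scratch contains no shell, CA certificates, timezone data or /etc/passwd, so anything the program expects from a base OS must be copied into the final stage explicitly — which is exactly where the unexpected “side effects” tend to appear.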
Changes always seem frightening at first. What will the new bring? Will I be able to deal with it? Is it as good as the old? New things give us the chance to bring something even better to light – something we often don’t take into consideration at the beginning.
Being responsible for Product Lifecycle Management (PLM) and 3D visualization topics at fme, I would like to share an update with you on our activities in the field of Augmented Reality (AR) and Virtual Reality (VR), which have recently been on everyone’s lips amid buzzwords such as digitalization, IoT and Industry 4.0.
It is a challenging time for many traditional pharmaceutical companies. The competitiveness of the marketplace, the looming loss of patents, ever-increasing international regulatory requirements and pressure to lower the overall cost of healthcare – they all increase the burden and force these companies to find new approaches in order to survive in the industry.
Pharmaceutical companies today are driven to adopt strategies for reducing resources and costs, a pressure that has been tangible in other manufacturing sectors for some time. IT departments are expected to support these business challenges and deliver cost-effective solutions without compromising quality, compliance, agility or flexibility.
Cloud computing seems to fulfil the promise of solving these business challenges, and life sciences firms increasingly look to it as a universal remedy. But how well does cloud computing coexist with GxP compliance and regulated environments?
The use of public cloud platforms such as Amazon Web Services (AWS) for the implementation of OpenText Documentum-based ECM environments is often viewed critically, even though there is quite a list of points in favour of it. In fact, much depends on the scope of the environment to be built, what data is processed there and how deeply AWS is to be integrated into the company’s own network. This blog post helps answer these questions and shows a first basic environment on the AWS platform.
OpenText Documentum is a full-fledged, mature server-based document management system that is accepted by regulators such as the FDA and is therefore widespread in pharmaceutical companies.
Compared with cloud computing technologies, which excel at providing elastic (scalable) services, OpenText Documentum products can be regarded as inflexible, monolithic, layered applications. Although they seem to be the exact opposite of the flexible microservice architecture used for cloud-native application design, there are ways to combine OpenText Documentum products with cloud computing technologies.
I usually write blog posts about general IT trends and about the paths and aberrations of digital transformation. I have always avoided writing articles about us, fme AG. Today, for once, I want to break that rule. Sometimes you walk out of a customer meeting and ask yourself: “What in God’s name do some so-called cloud consultants tell clients? Why do they confuse their clients more than neutrally showing them the options on the way to the cloud?”
The list of ways a company can move its applications to the cloud is long, and the resulting variety of terms is correspondingly confusing.
In today’s world, companies need to act quickly and remain flexible. Cloud-based platforms offer the optimal basis for this. In industrial manufacturing, the use of the cloud has been established for some time. The life sciences industry is just taking off in this area.
The term »cloud« has been a constant companion in the IT industry for almost ten years now, but everyone has a different idea of what it means. The definitions differ so greatly that every discussion about “cloud” begins with a joint definition of terms.
Containers – here, more specifically, Linux containers – are in short modularized software installations. Think of a container as an isolated area with a self-contained service. The container consists of all the dependent software the service needs to run. Each container/service can connect to other containers/services, but because the containers are isolated from each other, they cannot interfere with one another in terms of software versions and runtime behavior. For each container you can plan separately which Linux operating system, web server, language interpreter, etc. your service will rely on – whichever best fits your needs. That means, for example, that a service with heavy threading or performance needs could be written in Go based on Alpine, while another uses Apache with PHP, also on Alpine, and a third complies with prerequisites requiring Tomcat with Java on CentOS. All of this is possible with containers, even running on the same host.
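The mix-and-match scenario above can be sketched as a hypothetical docker-compose.yml (all service and image names here are illustrative, not from an actual deployment):

```yaml
version: "3"
services:
  thread-service:                  # performance-critical service written in Go
    image: my-go-service:alpine    # hypothetical image built on Alpine
    ports:
      - "8080:8080"
  web-frontend:                    # classic web application
    image: my-php-app:alpine       # hypothetical image with Apache + PHP on Alpine
    ports:
      - "8081:80"
  java-backend:                    # service bound to Tomcat/Java prerequisites
    image: my-tomcat-app:centos    # hypothetical image with Tomcat on CentOS
    ports:
      - "8082:8080"
```

All three services run on the same host, yet each ships its own userland and runtime stack, isolated from the others.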
Last week I attended Amazon Web Services’ tech conference, AWS re:Invent 2017, in Las Vegas. It lasted five days – a stretch of time that is not always easy to take off from your daily work. What follows are the most important takeaways from my perspective, in 7–10 minutes of reading.
* 10-Second Management Spoiler *
Serverless, machine learning, the machine learning camera DeepLens, Alexa for Business and Kubernetes as a managed service are the main highlights of this year’s re:Invent. By extending established services such as EC2, S3, Glacier and DynamoDB and making them more flexible, AWS helps customers map many requirements directly in the managed service and reduces the need for workaround implementations. It will be fascinating, and at times frightening, to see what becomes possible in the future through the combination of these powerful services.