Hungry for Digital Transformation: The New Full Stack Partner

 

Full stack can mean different things to different people — anything from pancakes (our favorite!) to developers to enterprise tech stacks of all kinds.

At Mark III, “full stack” also describes how we envision the channel partner of the future: over the past few years, we’ve steadily built our unique organization around full stack capabilities to help our clients succeed in the era of digital transformation.

For us, full stack capabilities comprise developers, data scientists, DevOps/Automation engineers, and systems architects/engineers, all working together as a cohesive unit to help our clients solve challenges across business and IT, and to partner with them in opening up new channels, revenue sources, and business models.

Why is a true full stack approach from partners needed now?

As industries slowly (or very quickly, in some cases) shift to open source-centric, cloud-native applications and services to support initiatives like IoT, AI, Deep Learning, Advanced Analytics, DevOps, and Mobile, the way a partner can help an enterprise implement and succeed with these initiatives has fundamentally shifted. It is no longer solely about architecture and implementation/operational excellence; it is a balance between great code, creative design, an understanding of analytics/AI/ML/DL, and the solid architecture concepts and principles we’ve always known (though perhaps implemented a bit differently via cloud-native, API-driven architectures).

Keep in mind that cloud-native has nothing to do with location. It is simply the new style of “composed” application or service: built on a foundation of largely open source-inspired frameworks and services, mostly portable, and deployable in the public cloud, on-premises in the datacenter, or, increasingly, at the edge.

The amazing (and yet daunting) thing about the impact of open source thinking over the past 15 years, across everything from software to hardware to datasets, is the number of options and the flexibility an enterprise or organization now has to build almost anything it needs to compete in the market and serve its customers however they prefer to be served.

In a world where all these options exist and the friction to implement ideas is dramatically lower, much of the value in a partner like us shifts from the traditional technology building blocks (IT infrastructure/software/services/etc.) we’ve always helped with to a balance of those building blocks and the ability to code, design, automate, and analyze data to bring all the pieces together.

An example might be how our developers and data scientists engage on an AI or Deep Learning initiative — say, building a Vision AI or an AI-enhanced analytics use case for a client. That involves days or weeks of iteration using TensorFlow or Caffe to get the training dataset and the model’s inherent margin of error just right for the business. Afterward, our team of DevOps engineers and system architects looks at how to architect the physical systems for the datacenter/edge (or services in the public cloud) and designs a Continuous Integration/Continuous Deployment (CI/CD) pipeline to automate iterations and deployments of new training data and the AI models themselves.
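As a purely illustrative sketch, a pipeline like that could be expressed in a GitLab-CI-style YAML file. Every stage name, script, and target here is a hypothetical placeholder, not taken from a real project:

```yaml
# Hypothetical CI/CD sketch for an AI model: retrain, gate on error, deploy.
# All stage names, scripts, and targets are illustrative placeholders.
stages:
  - train
  - evaluate
  - deploy

train_model:
  stage: train
  script:
    - python train.py --data ./training-data --out model/    # retrain on new data
  artifacts:
    paths:
      - model/

evaluate_model:
  stage: evaluate
  script:
    - python evaluate.py --model model/ --max-error 0.05     # gate on acceptable error

deploy_model:
  stage: deploy
  script:
    - python export.py --model model/ --target edge-inference  # ship model to the edge
```

The key idea is the evaluate stage acting as a quality gate: a new model only reaches the edge if its error stays within what the business has agreed is acceptable.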

Here’s an example of a “Medicine Recognition AI” concept that we built entirely ourselves via the full stack approach.  Our developers and data scientist built the platform via AI frameworks on servers with NVIDIA P100 GPUs (architected/implemented by our system engineers), aggregated the data needed for the training sets, iterated via different combinations of training data/techniques until we had an acceptable level of error (there’s always some error in every AI use case!), and then exported the AI model to an IoT edge device serving as an inference “server” that our system engineer team had set up.  For this initial build, we didn’t automate the deployment pipeline, but this could have been done in practice via our DevOps team if this was production and needed.
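The shape of that workflow — iterate training until the error is acceptable, then export the model for an edge inference device — can be sketched in plain Python. Everything below (the tiny perceptron, the error threshold, the JSON “export”) is a deliberately simplified stand-in for the real TensorFlow/Caffe training and edge deployment, not our actual implementation:

```python
import json
import random

# Toy stand-in for a real training run: a single perceptron learning OR.
# In practice this would be a deep learning model and a real dataset.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
ACCEPTABLE_ERROR = 0.0  # the "acceptable level of error" agreed with the business

def predict(weights, bias, x):
    """Classify input x with the current model parameters."""
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

def train(max_epochs=100, lr=0.1):
    """Iterate over the training data until the error rate is acceptable."""
    random.seed(42)
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = random.uniform(-1, 1)
    error_rate = 1.0
    for _ in range(max_epochs):
        errors = 0
        for x, target in DATA:
            delta = target - predict(weights, bias, x)
            if delta != 0:  # misclassified: nudge the model toward the target
                errors += 1
                weights[0] += lr * delta * x[0]
                weights[1] += lr * delta * x[1]
                bias += lr * delta
        error_rate = errors / len(DATA)
        if error_rate <= ACCEPTABLE_ERROR:  # good enough: stop iterating
            break
    return weights, bias, error_rate

def export_model(weights, bias, path="model.json"):
    """Stand-in for exporting the trained model to an edge inference device."""
    with open(path, "w") as f:
        json.dump({"weights": weights, "bias": bias}, f)

weights, bias, error_rate = train()
export_model(weights, bias)
```

On the edge device, the inference “server” would simply load the exported parameters and answer predictions — which is essentially what exporting a trained model to an edge inference node accomplishes at much larger scale.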

Another example is how we might utilize our development team to build a series of smart IoT kiosks with customer-facing chatbots using iOS that would then interface via IoT gateways and wireless networks (all of which we can assist with), and send key data back to AI and speedy analytics systems in the datacenter.  In this scenario, our entire team of developers, data scientists, DevOps, and system engineers would all play their role in making a transformational engagement vehicle, like the smart kiosk, available for tech-native end-users of that enterprise.

And yes, our team can always assist with “traditional” new architecture designs and optimizations of existing tech stacks on-premises in the datacenter (VMware SDDC, Microsoft, Oracle, SAP, etc.), just like we have for the past two decades! Only now, when we do so, we do it with a focus on optionality for our enterprise clients and an eye toward using that foundation to pivot to what the digital future looks like.

Hungry for a future of successful digital transformation?  The Full Stack never looked so good.

 

This entry was posted in AI, Big Data, Deep Learning, docker, efficiency, Full Stack, hadoop, HPC, IoT, TensorFlow, Uncategorized.
