How Alan AI works¶
Alan AI allows you to add a multimodal conversational experience to your app without significant development overhead.
Alan AI is an end-to-end conversational AI platform for building robust and reliable AI agents and chatbots. The Alan AI backend takes care of most of the heavy lifting: creating Spoken Language Understanding (SLU) models, training speech recognition software, and deploying and hosting conversational components. To create a conversational experience, you only need one developer, not a team of machine learning and DevOps specialists.
Alan AI lets you go beyond the capabilities of touch and type interfaces and add a conversational experience for any workflow or function in your app. Dialog scripts for conversational experiences are written in JavaScript, which makes them highly customizable and adaptable.
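To give a sense of what a dialog script looks like, here is a minimal sketch in the style of Alan AI's scripting API. The `intent()` and `play()` calls reflect that API; the specific menu-ordering phrases and the `ITEM` slot are hypothetical examples, not part of any real project:

```javascript
// Runs in the Alan AI Studio backend, not in the app itself.
// Matches a user utterance and replies with a spoken response.
intent('What can I order?', p => {
    p.play('We have pizza, pasta, and salads. What would you like?');
});

// Patterns can capture alternatives in a single intent.
intent('Add (a|one) $(ITEM pizza|pasta|salad) to my order', p => {
    // p.ITEM.value holds the matched menu item.
    p.play(`Adding ${p.ITEM.value} to your order.`);
});
```

Because scripts are plain JavaScript, you can branch on state, call external APIs, and reuse ordinary functions inside intent handlers.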
Alan AI multimodal interfaces are built once and can be deployed anywhere; they do not need to be rebuilt for specific platforms. We provide lightweight SDKs to integrate with:
Web frameworks and environments: React, Angular, Vue, Ember, plain JavaScript, Electron
iOS: Swift and Objective-C
Android: Kotlin and Java
Cross-platform frameworks: Flutter, Ionic, React Native, Apache Cordova
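As an illustration of how lightweight the integration is, embedding the Web SDK in a plain JavaScript app takes only a few lines. The `alanBtn` entry point and the `key`/`onCommand` options come from the `@alan-ai/alan-sdk-web` package; the project key is a placeholder, and the `navigate` command payload is a hypothetical example of data sent from a dialog script:

```javascript
import alanBtn from '@alan-ai/alan-sdk-web';

// Adds the Alan AI button to the page and connects it to a dialog script.
alanBtn({
    key: 'YOUR_ALAN_SDK_KEY', // placeholder: project key from Alan AI Studio
    onCommand: (commandData) => {
        // Handle commands sent from the dialog script.
        if (commandData.command === 'navigate') {
            window.location.href = commandData.route; // hypothetical payload field
        }
    },
});
```

The native and cross-platform SDKs follow the same pattern: attach the Alan AI button, pass the project key, and handle commands coming back from the script.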
Any updates to the conversational experience can be pushed out immediately, without rolling out a new app version. Thanks to Alan AI's serverless environment, these changes reach users on the fly.