
Credo AI FAQ

The following section answers common questions about installing the Credo AI platform.

Credo AI Cluster Installs FAQs

SysAdmin

How do I get the URL and Password for my instance of the platform?

Credo AI is configured to use single sign-on (SSO), and users are authenticated through the configured identity provider. No separate username and password are maintained in the Credo AI application.

What kind of privileges/hardware does the SysAdmin installing the Credo AI platform need?

The SysAdmin will need:

  • a workstation running Linux or macOS
  • sudo/root access on the workstation
  • a connection to the Kubernetes cluster API server
  • Kubernetes RBAC of cluster admin or namespace-scoped admin
Note: The KOTS CLI installer does not support Windows.

Tip: We recommend using a workstation with internet access to download the installer, plugins, and Credo AI policy assets.
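On a workstation with internet access, the KOTS CLI can be installed as a kubectl plugin using Replicated's install script:

```shell
# Install the latest KOTS CLI as a kubectl plugin (Linux/macOS).
curl https://kots.io/install | bash

# Verify the plugin is available.
kubectl kots version
```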

How can I download the artifacts required for installing the Credo AI platform?

All artifacts required for installing the Credo AI platform are available on a customer-specific, password-protected download portal, which Credo AI can give you access to. We recommend that the portal be accessible from within your corporate network, since the artifacts are several GB in size.

Does my kubernetes cluster need access to the internet?

The Credo AI platform can be installed without Internet access, once all the required files are downloaded and available locally. However, we recommend access to the Internet from the Kubernetes cluster for an efficient install and upgrade experience. See the online environment requirements for more details.

My kotsadm UI is not responding.

The kotsadm proxy deployment may need to be restarted:

kubectl rollout restart deployment/kurl-proxy-kotsadm
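A fuller recovery sequence looks like the following; this assumes kotsadm was installed into your current namespace (add `-n <namespace>` otherwise):

```shell
# Restart the kotsadm proxy deployment and wait for the rollout to complete.
kubectl rollout restart deployment/kurl-proxy-kotsadm
kubectl rollout status deployment/kurl-proxy-kotsadm --timeout=120s

# Confirm the kotsadm pods are healthy.
kubectl get pods -l app=kotsadm
```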

Cloud Providers

Common questions about cloud providers.

Which Kubernetes distributions does Credo AI support out-of-the-box?

Credo AI is designed to run on any distribution of Kubernetes; however, certain commercial distributions are untested. The Credo AI application is validated and tested on:

  • Amazon AWS EKS

Which cloud providers has Credo AI been tested on?

The Credo AI platform has been tested on the following cloud providers:

  • Amazon AWS

What container runtimes does Credo AI support?

Containers in the Credo AI Platform run on the following container runtimes:

  • containerd

Kubernetes Server

Common Kubernetes questions we have found.

What version(s) of Kubernetes Server does Credo AI support?

Use a version of KOTS that is compatible with the appropriate version of Kubernetes.

KOTS Versions          Kubernetes Compatibility
v1.105.2 and later     v1.28, v1.27, v1.26
v1.100.0 to v1.102.0   v1.27, v1.26
v1.93.1 to v1.99.0     v1.26

Refer to the Kubernetes / KOTS Compatibility reference to confirm the latest versions.

Can the Credo AI platform be scoped to a dedicated namespace?

The Credo AI platform can be deployed and scoped to a specific namespace.
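A namespace-scoped install with the KOTS CLI might look like the following sketch; `credo-ai` is a placeholder app slug and `credoai` a placeholder namespace — use the values provided by Credo AI:

```shell
# Create a dedicated namespace for the platform.
kubectl create namespace credoai

# Install the application scoped to that namespace.
kubectl kots install credo-ai --namespace credoai
```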

What are the minimum resource requirements for operating the Credo AI Platform?

We recommend a 3-node cluster for optimal performance of the Credo AI platform.

Is there a default StorageClass defined on the Kubernetes cluster?

Credo AI applications do not use persistent volume claims, so a StorageClass is not applicable.

What Ingress Controller are you planning to use to access the Credo AI platform? Is it already installed?

Credo AI recommends the cloud managed AWS ALB ingress controller for EKS when running in an existing cluster.
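For illustration, an Ingress backed by the AWS Load Balancer Controller might look like the following; the hostname and backend service name are placeholders, not the platform's actual resource names:

```shell
# Write an example ALB-backed Ingress manifest (placeholder names).
cat <<'EOF' > credoai-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: credoai
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: credoai.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: credoai-web
                port:
                  number: 80
EOF
# Review, then apply with: kubectl apply -f credoai-ingress.yaml
```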

Are there any SecurityContext requirements on the Kubernetes cluster?

The Credo AI platform is designed to run with the fewest permissions necessary. No container runs as root, and all processes are owned by non-system users. Please reach out to Credo AI if you have specific SecurityContext requirements.

Does Credo AI support running in SELinux environments?

If SELinux is enabled, the Credo AI platform requires it to run in permissive mode.
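On SELinux-enabled nodes, you can check and adjust the mode with the standard SELinux tools (run on each node, not from the workstation):

```shell
# Show the current SELinux mode: Enforcing, Permissive, or Disabled.
getenforce

# Switch to permissive mode until the next reboot.
sudo setenforce 0

# To make the change persistent, set SELINUX=permissive in /etc/selinux/config.
```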

Are there any Network Policies configured on the Kubernetes cluster?

Pods of the Credo AI platform will need to communicate with each other. By default, pods can communicate with each other. Please reach out to Credo AI if you have custom Network Policies configured on the Kubernetes cluster.
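If your cluster denies traffic by default, a minimal NetworkPolicy that lets pods in the platform's namespace talk to each other might look like this sketch; `credoai` is a placeholder namespace:

```shell
# Write an allow-within-namespace NetworkPolicy (placeholder namespace).
cat <<'EOF' > allow-same-namespace.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: credoai
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
EOF
# Review, then apply with: kubectl apply -f allow-same-namespace.yaml
```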

How many IP addresses should be available for the Credo AI Platform?

The Credo AI platform is architected to be scalable, using resources on demand. Given the dynamic nature of the infrastructure involved, we recommend attaching CIDR blocks with at least 128 IP addresses (a /25 block, for example) to the relevant subnets. This number may grow as more features (or microservices) are released to the platform.

Are there any namespace-level constraints enforced on the Kubernetes cluster?

Please let Credo AI know if there are any constraints configured at the namespace-level on the Kubernetes cluster, as this will help prepare for a smooth installation experience.

Are there any cluster-level constraints enforced on the Kubernetes cluster?

Please let Credo AI know if there are any specific cluster-level constraints configured on the Kubernetes cluster, as this will help prepare for a smooth installation experience.

Does the Kubernetes cluster have access to a private/public container registry?

The Kubernetes cluster on which the Credo AI platform will be installed must have connectivity to a container registry. The SysAdmin performing the installation must also have Read/Write access to the same container registry.

For air-gapped installations, the KOTS installer hosts application images in an in-cluster registry by default, but it can be configured to push images to a remote registry.
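An air-gapped install pointed at a remote registry might be invoked like the following; the app slug, bundle path, and registry host are placeholders:

```shell
# Push images from the airgap bundle to your registry and install from it.
kubectl kots install credo-ai \
  --airgap-bundle ./credo-ai.airgap \
  --kotsadm-registry registry.example.com \
  --registry-username "$REGISTRY_USER" \
  --registry-password "$REGISTRY_PASS"
```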

Object Storage

I'm not able to upload evidence.

If you see errors in the browser when uploading, check the browser's developer panel for detailed error messages.

If errors like this show up,

Failed to load resource: the server responded with a status of 403 (Forbidden)

the Credo AI app may not be sending the correct authentication or server-side encryption (SSE) headers in calls to the bucket. Check that your authentication and SSE header settings match your bucket configuration.
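You can inspect the bucket's configuration with the AWS CLI to compare against the app's settings; `example-bucket` is the placeholder bucket name from the error above:

```shell
# Show the bucket's default server-side encryption settings.
aws s3api get-bucket-encryption --bucket example-bucket

# Show any CORS rules currently attached to the bucket.
aws s3api get-bucket-cors --bucket example-bucket
```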

If errors like this show up,

Access to XMLHttpRequest at 'https://s3.us-west-2.amazonaws.com/example-bucket...' from origin 'https://credoai.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

ensure the S3 buckets have a CORS policy attached that allows requests from your instance's hostname.

For example, if hosting Credo AI at credoai.example.com:

[
  {
    "ID": "protect_bucket",
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["POST", "GET", "HEAD", "DELETE", "PUT"],
    "AllowedOrigins": ["https://credoai.example.com"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 0
  }
]

Note that the allowed origin must include the scheme (https://), since the browser's Origin header includes it.
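To attach these rules with the AWS CLI, note that `put-bucket-cors` expects the rule array wrapped in a `CORSRules` key (the S3 console accepts the bare array). A sketch, reusing the example bucket name from above:

```shell
# Wrap the rules in CORSRules for the s3api call.
cat <<'EOF' > cors.json
{
  "CORSRules": [
    {
      "ID": "protect_bucket",
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["POST", "GET", "HEAD", "DELETE", "PUT"],
      "AllowedOrigins": ["https://credoai.example.com"],
      "ExposeHeaders": [],
      "MaxAgeSeconds": 0
    }
  ]
}
EOF
# Apply with: aws s3api put-bucket-cors --bucket example-bucket --cors-configuration file://cors.json
```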

Other Considerations

Does your enterprise have a software procurement process?

Please keep Credo AI up to date on any software procurement process that may be in place before installing new software, and potential turnaround times for such processes.

Do you want to deploy Credo AI on infrastructure that isn't mentioned above (e.g.: Cloud Providers, Kubernetes Distributions, etc.)?

Inform Credo AI as soon as possible so we can set up an architecture review between your platform team and Credo AI's platform team.

Can any of the Credo AI platform components be externalized, so it’s not managed by Credo AI?

The platform supports AWS S3, as well as most S3-compatible systems, as the object/blob store. The platform also depends on a relational database and requires an externalized PostgreSQL database.

A managed service for S3 and/or Postgres is recommended for production-grade installs.

Can the Credo AI platform be deployed on a Kubernetes cluster that is shared with other applications?

The Credo AI platform has been architected to be highly scalable and reliable. Based on usage (number of models) and load (data ingestion), pods are scaled up or down over short periods of time to ensure efficient operation. As such, if other applications will run on the same Kubernetes cluster, talk to Credo AI about provisioning dedicated node groups for the cluster.

Does the Credo AI platform support different organizations/business units using the same application?

Multiple organizations and business units are not supported at this time.