This post will teach you how to create a native iOS app in Swift that uses Amazon Bedrock to provide AI-powered chat and image features. We'll use serverless resources, including AWS Lambda and API Gateway, for the backend.
The sample application includes the following:
- A mobile application using Swift
- An integration with Amazon Bedrock using the models amazon.titan-image-generator-v1 and ai21.j2-mid-v1
- Serverless backend processing using AWS Lambda with TypeScript
- RESTful APIs implemented with Amazon API Gateway for communication
- Amazon CloudWatch Logs to monitor the AWS Lambda functions and view their logs
The final result will be the following app:
Prerequisites
Before you get started, make sure you have the following:
- An AWS account
- Node.js v18 or later
- Serverless Framework, AWS SAM, or AWS CDK (depending on whether you want to use Infrastructure as Code; I'll be using Serverless Framework)
- A package manager; I'll be using yarn
- Xcode version 15 or later
Here's how the pieces fit together:
- Users access the application from their mobile devices and the app sends a request to Amazon API Gateway
- API Gateway routes the request to either the ImageFunction or TextFunction Lambda
- AWS Lambda communicates with an Amazon Bedrock model and retrieves the generated response in JSON format
- The processed response is sent back to the app for display, enabling content or chat interaction
Before writing any code, request access to the models in Amazon Bedrock:
- Log in to your AWS Console
- Go to Amazon Bedrock
- In the left navigation, click on Model access
- Request access – note that the request body may vary depending on the model you select. For images I will be using the Titan Image Generator G1 model, and for text the ai21.j2-mid-v1 model
You can choose your preferred tool for deploying Lambda functions, but I’ll provide the code necessary to create them:
Text Lambda
Please take note of a few important points below:
- You need to import the @aws-sdk/client-bedrock-runtime package
- You need to set the modelId
- The prompt is the search text provided by your API
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers belong on the HTTP response the Lambda returns,
// not in the Bedrock model input
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': true,
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;
  const input = {
    modelId: 'ai21.j2-mid-v1',
    contentType: 'application/json',
    accept: '*/*',
    body: JSON.stringify({
      prompt: prompt,
      maxTokens: 200,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 }
    })
  };
  try {
    const data = await client.send(new InvokeModelCommand(input));
    const jsonString = Buffer.from(data.body).toString('utf8');
    const parsedData = JSON.parse(jsonString);
    const text = parsedData.completions[0].data.text;
    return { statusCode: 200, headers: corsHeaders, body: text };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: 'Error invoking model' };
  }
}
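To make the parsing step concrete, here's a rough sketch of the shape of the JSON body the ai21.j2-mid-v1 model returns. The field names follow what the handler above reads; the sample values are made up:

```typescript
// Made-up example of the response body shape the handler parses.
// Only the fields the handler actually reads are shown.
const sampleResponseBody = JSON.stringify({
  completions: [
    {
      data: { text: 'Hello! How can I help you today?' }
    }
  ]
});

// Same extraction logic as in the handler above
const parsedData = JSON.parse(sampleResponseBody);
const text = parsedData.completions[0].data.text;
console.log(text); // -> "Hello! How can I help you today?"
```

If the model returns no completions, `completions[0]` would throw, so in a production handler you'd want to guard that access.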
Image Lambda
import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers belong on the HTTP response the Lambda returns,
// not in the Bedrock model input
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': true,
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;
  const input = {
    modelId: 'amazon.titan-image-generator-v1',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      taskType: 'TEXT_IMAGE',
      textToImageParams: {
        text: prompt
      },
      imageGenerationConfig: {
        cfgScale: 10,
        seed: 0,
        width: 512,
        height: 512,
        numberOfImages: 1
      }
    })
  };
  try {
    const response = await client.send(new InvokeModelCommand(input));
    const jsonString = new TextDecoder('utf-8').decode(response.body);
    const parsedData = JSON.parse(jsonString);
    // images[0] is the generated image as a base64-encoded string
    return { statusCode: 200, headers: corsHeaders, body: parsedData.images[0] };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: 'Error generating image' };
  }
}
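For reference, the Titan Image Generator response the handler parses carries the image as a base64-encoded string, which the client then decodes back into binary image data. Here's a made-up sketch showing only the `images` field the code reads:

```typescript
// Made-up sketch of the Titan response shape: images is an array of
// base64-encoded strings, one entry per requested image.
const sampleTitanResponse = JSON.stringify({
  images: ['aGVsbG8='] // base64 for "hello", standing in for real PNG bytes
});

const parsed = JSON.parse(sampleTitanResponse);
const base64Image = parsed.images[0];

// The iOS app decodes this string back into binary image data;
// in Node the equivalent is:
const imageBytes = Buffer.from(base64Image, 'base64');
console.log(imageBytes.toString('utf8')); // -> "hello"
```

This is why the Swift service later in the post uses `Data(base64Encoded:)` before constructing a `UIImage`.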
Now deploy your Lambdas. If you're using the Serverless Framework, you can use the following configuration:
service: aws-bedrock-ts
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x
  timeout: 30
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 'bedrock:InvokeModel'
          Resource: '*'

functions:
  bedrockText:
    handler: src/bedrock/text.handler
    name: 'aws-bedrock-text'
    events:
      - httpApi:
          path: /bedrock/text
          method: post
  bedrockImage:
    handler: src/bedrock/image.handler
    name: 'aws-bedrock-image'
    events:
      - httpApi:
          path: /bedrock/image
          method: post
After deployment, the framework prints the API endpoints for your Lambdas; save them.
Developing your iOS app
Setting Up the Project
First, define a model to store your chat messages, which can be either text or images. I'll name my file ChatMessage.swift:
import UIKit

struct ChatMessage: Equatable {
    var text: String?
    var image: UIImage?
    var isImage: Bool
    var isUser: Bool
}
Service for Handling API Requests
This service is responsible for managing API interactions, including sending prompts to your Lambda functions and processing the responses. Make sure to set baseURL to your own API Gateway endpoint. I'll name my file APIService.swift:
import UIKit

class APIService: ObservableObject {
    @Published var messages: [ChatMessage] = []

    private func getEndpointURL(for type: String) -> URL? {
        let baseURL = "" // Set this to your API Gateway base URL
        switch type {
        case "text":
            return URL(string: "\(baseURL)/text")
        case "image":
            return URL(string: "\(baseURL)/image")
        default:
            return nil
        }
    }

    func addUserPrompt(_ prompt: String) {
        messages.append(ChatMessage(text: prompt, image: nil, isImage: false, isUser: true))
    }

    func sendRequest(prompt: String, type: String, completion: @escaping () -> Void) {
        guard let url = getEndpointURL(for: type) else {
            print("Invalid URL for type: \(type)")
            completion()
            return
        }
        var request = URLRequest(url: url)
        request.httpMethod = "POST"
        let parameters: [String: Any] = ["prompt": prompt]
        request.httpBody = try? JSONSerialization.data(withJSONObject: parameters)
        request.addValue("application/json", forHTTPHeaderField: "Content-Type")

        URLSession.shared.dataTask(with: request) { data, response, error in
            // @Published properties must be updated on the main thread
            DispatchQueue.main.async {
                defer { completion() }
                if let error = error {
                    print("Error: \(error)")
                    return
                }
                guard let data = data else { return }
                if type == "text" {
                    if let responseString = String(data: data, encoding: .utf8) {
                        let trimmedResponse = responseString.trimmingCharacters(in: .whitespacesAndNewlines)
                        self.messages.append(ChatMessage(text: trimmedResponse, image: nil, isImage: false, isUser: false))
                    }
                } else {
                    // The image endpoint returns the image as a base64-encoded string
                    if let base64String = String(data: data, encoding: .utf8),
                       let imageData = Data(base64Encoded: base64String, options: .ignoreUnknownCharacters),
                       let image = UIImage(data: imageData) {
                        self.messages.append(ChatMessage(text: nil, image: image, isImage: true, isUser: false))
                    }
                }
            }
        }.resume()
    }
}
View for Chat Interface
Now, create the main view that will handle the UI and display the chat messages. I'll name my file BedrockView.swift:
import SwiftUI

struct BedrockView: View {
    @StateObject var apiService = APIService()
    @State private var prompt: String = ""
    @State private var selectedType = 0
    @State private var isLoading = false

    var body: some View {
        VStack {
            ScrollViewReader { scrollViewProxy in
                ScrollView {
                    VStack {
                        ForEach(apiService.messages.indices, id: \.self) { index in
                            if apiService.messages[index].isImage, let image = apiService.messages[index].image {
                                HStack {
                                    Spacer()
                                    Image(uiImage: image)
                                        .resizable()
                                        .scaledToFit()
                                        .frame(height: 200)
                                        .frame(maxWidth: .infinity, alignment: .leading)
                                        .cornerRadius(10)
                                        .padding(.vertical, 5)
                                }
                            } else if let text = apiService.messages[index].text {
                                HStack {
                                    if apiService.messages[index].isUser {
                                        Spacer()
                                        Text(text)
                                            .padding(.vertical, 6)
                                            .padding(.horizontal, 12)
                                            .background(Color.blue.opacity(0.2))
                                            .cornerRadius(10)
                                            .frame(maxWidth: .infinity, alignment: .trailing)
                                    } else {
                                        Text(text)
                                            .padding(.vertical, 6)
                                            .padding(.horizontal, 12)
                                            .background(Color.gray.opacity(0.2))
                                            .cornerRadius(10)
                                            .frame(maxWidth: .infinity, alignment: .leading)
                                    }
                                }
                                .padding(.vertical, 1)
                            }
                        }
                    }
                    if isLoading {
                        ProgressView()
                            .padding(.vertical, 20)
                    }
                    // Anchor view inside the scrollable content so scrollTo can find it
                    Color.clear
                        .frame(height: 1)
                        .id("BOTTOM")
                }
                .padding(.horizontal)
                .onChange(of: apiService.messages) { _ in
                    withAnimation {
                        scrollViewProxy.scrollTo("BOTTOM", anchor: .bottom)
                    }
                }
            }
            VStack {
                TextField("Enter prompt...", text: $prompt)
                    .textFieldStyle(.roundedBorder)
                    .padding(.horizontal)
                    .padding(.vertical, 10)
                HStack {
                    Picker(selection: $selectedType, label: Text("Type")) {
                        Text("Text").tag(0)
                        Text("Image").tag(1)
                    }
                    .pickerStyle(SegmentedPickerStyle())
                    .frame(maxWidth: .infinity)
                    .padding(.leading, 10)
                    Button(action: {
                        if prompt.isEmpty { return }
                        apiService.addUserPrompt(prompt)
                        let type = selectedType == 0 ? "text" : "image"
                        isLoading = true
                        apiService.sendRequest(prompt: prompt, type: type) {
                            isLoading = false
                        }
                        prompt = ""
                    }) {
                        Text("Send")
                            .frame(width: 100, height: 2)
                            .padding()
                            .background(Color.primary)
                            .foregroundColor(.white)
                            .cornerRadius(10)
                    }
                }
                .padding(.horizontal)
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        BedrockView()
    }
}
App Entry Point
Go to YourAppNameApp.swift and update the default entry point created when you set up your SwiftUI project so that it launches the main view. Mine is BedrockView, as you saw above.
import SwiftUI

@main
struct BedrockSwiftApp: App {
    var body: some Scene {
        WindowGroup {
            BedrockView()
        }
    }
}
Running the App in Xcode
Now you're ready to run your app! Follow these steps to launch it in Xcode:
- Select a Device: choose a simulator or connected device from the toolbar.
- Build and Run: click the "Run" button (or press Cmd + R) to build and run the app.
This will launch the app on your selected device, allowing you to interact with Amazon Bedrock's chat and image generation features.
A couple of notes
As this is a local app for testing, I've set Access-Control-Allow-Origin to `*`. Additionally, you may need to adjust the CORS settings in API Gateway.
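If you run into CORS errors, the Serverless Framework can configure CORS on the HTTP API for you at the provider level. A minimal sketch (adjust the origins and headers to your needs):

```yaml
provider:
  name: aws
  runtime: nodejs18.x
  httpApi:
    # `cors: true` enables permissive defaults; replace with an object
    # (allowedOrigins, allowedHeaders, allowedMethods) to lock it down
    cors: true
```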
Note that API calls may incur a small cost. For detailed pricing information, please refer to the Amazon Bedrock pricing page.
GitHub Repositories
The source code for this project is available on GitHub:
In this post, I’ve walked you through building a simple AI-chat application for iOS, using native Swift alongside serverless AWS services. By integrating Amazon Bedrock's generative AI models with services like AWS Lambda and API Gateway, we’ve created a streamlined solution that leverages the power of AWS in a native mobile experience. Please note that I’ve aimed to use only native components in the app, though there are certainly areas for improvement. Additionally, securing your API with tokens is essential; I’ll cover this topic in detail in an upcoming post.