Welcome,

Elevate Your Conversations with Seamless AI Integration

Compose smarter, faster, and more engaging messages with overlay assistance, and enjoy a refined typing experience with AI-enhanced predictions and autocorrect.


Seamless Overlay

Integrates smoothly with your favorite apps, providing AI assistance without disrupting your workflow.

AI-Powered Suggestions

Receive intelligent suggestions and completions as you type, enhancing your messaging experience.

Smart Keyboard Experience

Enjoy AI-driven autocorrect, predictive text, and multilingual support for effortless typing.

About 3TAIOverlay & 3TAIKeyboard

3TAIOverlay and 3TAIKeyboard are designed to boost your productivity and creativity on Android devices. The overlay offers AI-powered assistance for crafting compelling messages, while the keyboard delivers smart typing through features like autocorrect and predictive text—all processed locally to ensure your privacy remains intact. Experience the future of mobile communication with our innovative, privacy-focused tools.


Guide to Creating a Self-Hosted AI with a Flask API

Ensure Python 3.x is installed and use pip to install the required libraries: Flask, flask-cors, flask-limiter, langchain-groq, and langchain-openai.
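Assuming a standard pip setup, the install step might look like this (package names taken from the imports used later in this guide):

```shell
pip install Flask flask-cors flask-limiter langchain-groq langchain-openai
```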

Register for API keys from Groq and OpenAI, then set them in your environment or application settings.
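On Linux/macOS shells, one common way to set the keys is via environment variables before starting the app (the bracketed placeholders are yours to fill in):

```shell
export GROQ_API_KEY="[GROQ_API_KEY]"
export OPENAI_API_KEY="[OPENAI_API_KEY]"
```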

Choose between Groq’s Llama model and OpenAI’s GPT-4o mini model, and activate one based on your preference.

Create a Flask app to handle web requests, enable CORS for cross-origin requests, and set up rate limiting to control request frequency.

Once the Flask app is running, send POST requests to the /ai endpoint with a prompt. Format the request as JSON using an HTTP client.

The AI will process the input and return a JSON response with generated text based on the selected model.
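As a sketch of the client side, a small Python script could send the prompt and read back the JSON response. It assumes the Flask app is running locally on port 5000 and uses the third-party `requests` package (any HTTP client works); `build_payload` shapes the body the /ai endpoint expects:

```python
API_URL = "http://localhost:5000/ai"  # Assumption: Flask app running locally on port 5000

def build_payload(prompt: str) -> dict:
    """Shape the JSON body the /ai endpoint expects: {"data": "<prompt>"}."""
    return {"data": prompt}

def query_ai(prompt: str) -> str:
    """POST a prompt to the /ai endpoint and return the generated text."""
    import requests  # Third-party HTTP client (pip install requests)
    resp = requests.post(API_URL, json=build_payload(prompt), timeout=30)
    resp.raise_for_status()  # Fail loudly on 4xx/5xx (including 429 rate limits)
    return resp.json()["response"]

if __name__ == "__main__":
    print(query_ai("Write a one-line friendly greeting."))
```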

Monitor the rate limits set in the Flask app to prevent exceeding request thresholds. Once a client exceeds them, flask-limiter rejects further requests with an HTTP 429 (Too Many Requests) response until the limit window resets.

Modify prompts or adjust application settings to meet your specific needs.
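For example, the system role hard-coded in aifunc could be made configurable. This hypothetical build_messages helper (not part of the files below) shows one way to parameterize it:

```python
def build_messages(prompt: str, system_role: str = "You are a helpful assistant.") -> list:
    """Build the (role, content) message pairs passed to the model."""
    return [
        ("system", system_role),  # Customize the assistant's behavior here
        ("human", prompt),        # The user's prompt
    ]

# Example: a more specialized assistant persona
messages = build_messages(
    "Summarize this article in two sentences.",
    system_role="You are a concise editorial assistant.",
)
```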


# Aifunc.py File ------------------------------------------------
# Import necessary modules
import os

from langchain_groq import ChatGroq  # For interacting with Groq models
from langchain_openai import ChatOpenAI  # For interacting with OpenAI models

# Set environment variables for API keys
os.environ["GROQ_API_KEY"] = "[GROQ_API_KEY]"  # Replace with your Groq API key
os.environ["OPENAI_API_KEY"] = "[OPENAI_API_KEY]"  # Replace with your OpenAI API key

# Initialize the AI model — exactly one of the two options below must be active,
# otherwise aifunc will raise a NameError when it references llm:

llm = ChatGroq(model="llama-3.3-70b-versatile")  # Using Groq's Llama model
# or
# llm = ChatOpenAI(model="gpt-4o-mini")  # Using OpenAI's GPT-4o mini model

def aifunc(prompt):
    """
    Generates a response from the AI model based on the provided prompt.

    Args:
        prompt (str): The input prompt for the AI.

    Returns:
        str: The AI's response as a string.
    """
    messages = [
        (
            "system",
            "You are a helpful assistant."  # Setting the system role for the AI
        ),
        ("human", prompt),  # Adding the human's prompt
    ]

    ai_msg = llm.invoke(messages)  # Invoke the AI model with the messages
    return ai_msg.content  # Return the AI's response content
# Main.py File ------------------------------------------------
# Import required modules for Flask and rate-limiting
from Aifunc import aifunc  # Import the AI function from Aifunc.py
from flask import Flask, request, jsonify  # Flask essentials
from flask_cors import CORS  # For handling Cross-Origin Resource Sharing
from flask_limiter import Limiter  # Rate-limiting module
from flask_limiter.util import get_remote_address  # Utility to get user's IP address

# Initialize the Flask app
app = Flask(__name__)

# Enable Cross-Origin Resource Sharing
CORS(app)

# Initialize the rate limiter with default limits
limiter = Limiter(
    get_remote_address,  # Function to retrieve the client's IP address
    app=app,
    default_limits=["200 per day", "50 per hour"],  # Default rate limits for all routes
)

@app.route('/ai', methods=["POST"])
@limiter.limit("10 per minute")  # Additional limit for this specific route
def ai_endpoint():
    """
    Flask route to handle AI requests.

    Expects a JSON payload with a "data" field containing the prompt.
    Returns the AI's response.

    Returns:
        JSON: AI-generated response, or an error for malformed requests.
    """
    data = request.get_json(silent=True)  # Parse the incoming JSON data
    if not data or not data.get("data"):
        # Reject requests that are missing the prompt
        return jsonify({"error": "Missing 'data' field in request body"}), 400
    prompt = data["data"]  # Extract the prompt from the "data" field

    # Generate AI response using the aifunc function
    response = aifunc(prompt)

    # Return the AI's response as a JSON object
    return jsonify({"response": response})


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)