
2023-12-08: tldraw make real, AWS Q, Google Gemini, GitLab Duo Chat, CI/CD Observability, KubeCon NA recordings

Thanks for reading the web version. You can subscribe to the Ops In Dev newsletter to receive it in your inbox.

👋 Hey, lovely to see you again

This month focuses on learning AI/ML and LLMs, summarizing the many announcements around generative AI and Observability, and sharing useful resources for more efficiency. Maybe you'll get inspired to try them in the quieter time of the year -- or simply enjoy time off to rest.

🌱 The Inner Dev learning ...

Cloud-native and programming language history documentaries on the rise: Watch and learn about Ruby on Rails.

🤖 The Inner Dev learning AI/ML

After KubeCon NA 2023, I started playing with Ollama on my MacBook and asked it a few questions, following the Code Llama tutorial and examples. Ollama was even able to pitch an AI talk for KubeCon EU 2024 (no worries, fellow CFP reviewers, I wrote my own abstract beforehand). This was probably the missing bit to get me hooked on more AI learning, in an offline environment that I can break and debug. ChatGPT and other SaaS options are nice, but not understanding what happens behind APIs and chat prompts can be a blocker. I love the open-source transparency approach here -- and others started their own projects too: a Copilot alternative using Ollama. You can follow my first steps with Ollama and Langchain in Python in this project -- I asked it to explain an IPv6 regex.

VS Code with Python code for Langchain with Ollama, showing GitLab Duo Code Suggestions
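To give a taste of the kind of question I asked, here is a minimal sketch in Python. The exact regex from the project isn't reproduced here; the pattern below is a simplified, hypothetical one covering only full, uncompressed IPv6 addresses:

```python
import re

# Simplified pattern: eight groups of 1-4 hex digits separated by colons.
# Compressed forms like "::1" need a far more involved pattern -- exactly
# the kind of thing worth asking a local LLM to explain.
IPV6_FULL = re.compile(r"^([0-9A-Fa-f]{1,4}:){7}[0-9A-Fa-f]{1,4}$")

def is_full_ipv6(addr: str) -> bool:
    """Return True if addr is an uncompressed IPv6 address."""
    return IPV6_FULL.match(addr) is not None

print(is_full_ipv6("2001:0db8:85a3:0000:0000:8a2e:0370:7334"))  # True
print(is_full_ipv6("::1"))  # False: compressed form not covered
```

Feeding a pattern like this to a local model and asking it to explain each token group is a nice, self-contained first exercise.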

Microsoft released 12 lessons to start with generative AI as a free learning resource. The lessons explain the concepts of generative AI (genAI) and Large Language Models (LLMs), compare different LLMs, and cover using genAI responsibly, prompt engineering, building text generation applications, chat apps, search apps using vector databases, image generation apps, and low-code AI apps, integrating external apps with function calls, and designing UX for AI apps. Another great introduction to LLMs for busy folks can be found in this 1-hour talk.

At the Cloud-Native Saar meetup, I was invited to talk about "Efficient DevSecOps workflows with a little help from AI" (slides). The folks from EmpathyOps hosted a Twitter/X space about "GitLab Diaries: Driving DevSecOps efficiency through AI and culture" (recording), which turned into a fun ask-me-anything learning session.

Use cases:

  • tldraw showed how to use AI to draw a UI and make it real (tweet, source code). You can learn more about how the idea came to life, inspired by the OpenAI dev day.
  • excalidraw is a similar drawing tool, and they shared a new text-to-diagram feature using AI. Dan Lorenc tried it on the OCI distribution spec workflow. Impressive -- now do the same for Mermaid charts (GitLab Duo can do it).
  • llamafile lets you distribute and run LLMs with a single file (announcement blog post). Special applause for improving the developer experience: "Our goal is to make open source large language models much more accessible to both developers and end users."
  • GPTCache is a library for creating a semantic cache for LLM queries.
  • Run Llama on a microcontroller (tweet, source code), noting that the 15M model runs at ~2.5 tokens/s.
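The semantic-cache idea behind tools like GPTCache can be sketched in a few lines: instead of keying on exact query strings, cache lookups compare query similarity against stored entries. The sketch below is illustrative only -- it uses a toy bag-of-words "embedding", not GPTCache's actual API or a real embedding model:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. Real semantic caches
    # use learned embedding models instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Cache LLM answers, returning hits for similar-enough queries."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, query: str):
        qv = embed(query)
        for ev, answer in self.entries:
            if cosine(qv, ev) >= self.threshold:
                return answer  # close enough: reuse the cached answer
        return None  # miss: caller queries the LLM and calls put()

    def put(self, query: str, answer: str):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is an ipv6 address", "A 128-bit address ...")
# A near-duplicate query hits the cache; an unrelated one misses.
print(cache.get("what is an ipv6 address ?"))  # "A 128-bit address ..."
print(cache.get("tell me about kubernetes"))   # None
```

The payoff: repeated or rephrased prompts skip the expensive LLM call entirely, which is where most of the latency and cost savings come from.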

News feed:

  • Google introduced Gemini, their largest and most capable AI model.
  • AWS announced Q, their AI-powered chat assistant, and also released Guardrails for Amazon Bedrock in preview.
  • Microsoft announced Phi-2 as their latest small language model (SLM), released as open source and available for preview in the Azure AI catalog, amongst more model updates. This model table sheet helps put all the models into perspective. One goal for smaller language models is to fine-tune them for cloud-native and edge use cases.

🐝 The Inner Dev learning eBPF

Here are a few highlights and quick bites from this month:

πŸ‘οΈ Observability

We kicked off the preliminary tasks for creating a new CI/CD Observability WG in the OpenTelemetry community (PR, GitLab issue discussion). I look forward to collaborating on a specification and implementation in 2024.

Grafana announced Beyla 1.0, providing zero-code instrumentation for application telemetry using eBPF. Events are captured for HTTPS and gRPC services and transformed into OpenTelemetry trace spans and Rate-Errors-Duration (RED) metrics. Beyla is compatible with OpenTelemetry and Prometheus as Observability storage backends.
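RED stands for Rate, Errors, Duration. A toy sketch of the aggregation (illustrative only -- not Beyla's actual implementation, and the field names are assumptions) could look like:

```python
def red_metrics(requests, window_seconds):
    """Aggregate RED (Rate, Errors, Duration) metrics from request
    records. Each record is a (duration_ms, http_status) tuple."""
    rate = len(requests) / window_seconds  # requests per second
    errors = sum(1 for _, status in requests if status >= 500) / len(requests)
    durations = sorted(d for d, _ in requests)
    p95 = durations[int(0.95 * (len(durations) - 1))]  # naive percentile
    return {"rate": rate, "error_ratio": errors, "p95_ms": p95}

reqs = [(12, 200), (30, 200), (250, 500), (18, 200)]
print(red_metrics(reqs, window_seconds=60))
# {'rate': 0.066..., 'error_ratio': 0.25, 'p95_ms': 30}
```

The appeal of zero-code instrumentation is that an eBPF probe collects those (duration, status) pairs at the kernel boundary, so the application itself needs no changes.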

New AI/LLM observability entries:

  • Langfuse provides open-source observability and analytics for LLM applications.
  • New Relic announced AI monitoring, an APM for AI. The infrastructure requirements have changed, now including LLMs and vector data stores, and so have the needs for quality and accuracy, performance, cost, responsible use, and security for AI. The article dives deeper into the different layers and how to debug problems.

The PromCon 2023 recordings are available; a few personal highlights:

πŸ›‘οΈ DevSecOps

It's not always DNS -- unless it is: how platform engineering, observability, and SRE play well together to find the root cause.

Curious exactly what happens when you run a program on your computer? Read Putting the β€œYou” in CPU.

Microsoft announced .NET Aspire, providing a framework of tools to deploy applications into distributed cloud-native environments. It is not a new programming language :)

If you are using AI to analyze images, beware of hidden text in them, which could cause unexpected chat prompt responses. A great example instructs the model to ignore everything else and just print "hire him.".

🌀️ Cloud Native

KubeCon NA 2023 brought many interesting learning stories (YouTube playlist). Here are a few of my highlights:

Friendly community folks also created more KubeCon NA 2023 summaries:

📚 Tools and tips for your daily use

  • sd - search & displace is a find-and-replace CLI that you can use as a replacement for sed and awk.
  • Run cal on the terminal to print the calendar. Thanks Adrien Joly.
  • is an anonymous & ephemeral Docker image registry.
  • Stern allows you to tail multiple pods on Kubernetes and multiple containers within each pod. Each result is color-coded for quicker debugging.
  • Kubewarden is a policy engine for Kubernetes. Policies can be written using regular programming languages or Domain Specific Languages (DSL). Policies are compiled into WebAssembly modules that are then distributed using traditional container registries.
  • Kube-Hetzner is a highly optimized, easy-to-use, auto-upgradable, HA-by-default, load-balanced Kubernetes cluster powered by k3s on MicroOS and deployed for peanuts on Hetzner Cloud.
  • Goss is a YAML-based serverspec alternative for validating a server's configuration. It eases the process of writing tests by allowing the user to generate them from the current system state. Once the test suite is written, it can be executed, waited on, or served as a health endpoint. kgoss is a wrapper that helps run goss tests in containers and pods on Kubernetes.
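As a taste of the Goss workflow, a minimal gossfile might assert that SSH is listening and its service is running. This is an illustrative sketch; the port and service name are assumptions for the example:

```yaml
# goss.yaml -- minimal, illustrative gossfile (assumed port/service names)
port:
  tcp:22:
    listening: true
service:
  sshd:
    enabled: true
    running: true
```

You can generate entries like these from the live system with `goss add`, then execute the suite with `goss validate`.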

🔖 Book'mark

🎯 Release speed-run

GitLab 16.6 brings GitLab Duo Chat in Beta, reusable CI/CD components in Beta, minimal forking, real-time Kubernetes status updates, and more.

Parca v0.28.0 features eBPF-based profiling for Python and Ruby, enabled by default. OpenTelemetry Collector v1.0.0/v0.90.0 is out. Prometheus v2.48.0 brings support for warnings in PromQL query results, improved support for native histograms in promtool, and new authentication methods for remote-write endpoints. Prometheus Operator 0.70.0 brings support for Azure and GCE service discovery in the ScrapeConfig CRD.

Wireshark 4.2.0 is out. Rust 1.74.0 supports lint configuration through Cargo, allowing, for example, more specific lint levels for included crates.

🎥 Events and CFPs

👋 CFPs due soon

Looking for more CfPs?

🎀 Shoutouts

Being human. Shoutout to Emily Freeman for her touching talk at Monktoberfest: The Day Chicken Tried to Kill Me, followed by Pauline Narvas, sharing deep struggles and growth. Watch and read for yourself.


Happy holidays, if you celebrate -- or enjoy the quiet time to relax :-) Read you next year.

Thanks for reading! If you are viewing the website archive, make sure to subscribe to stay in the loop! See you next month 🤗

Cheers, Michael

PS: If you want to share items for the next newsletter, just reply to this newsletter, send a merge request, or let me know through LinkedIn, Twitter/X, Mastodon, Blue Sky. Thanks!