This workshop will use Software Carpentry's Programming with Python tutorial. We will start by learning the fundamentals of Python and coding in general, such as variable assignment and working with different types of data, leading into a discussion of how to evaluate and visualise tabular data.
Explore MicroPython on an ESP32 microcontroller for Internet of Things applications. Learn how to use sensors and timers to collect data and deliver it to a collection service over a WiFi connection. A short opening lecture will describe the capabilities of the hardware, then we're into hands-on projects, individually or in groups, demonstrating common applications for this and similar low-cost hardware. The afternoon session will feature more advanced projects.
Did you attend last year? The programme will be largely similar. You are invited to work through the advanced programme, work on your own project, or assist others.
Post-workshop review/assistance session
A warm welcome to Kiwi PyCon!
I make code for a living. I also make code, and other things, for fun. I love learning new things and discovering new tools. So when I run into a tool that everyone tells me is the future of coding but I just can’t bring myself to accept it, what do I do?
Large language models are the future of coding, right? It’s what everyone’s saying. It’s what everyone’s boss is saying. We all need to get on board or get left behind.
Is this true? What makes a change like this inevitable? What makes a change like this good?
When things like this come along and I find myself disagreeing, it's always interesting to try and work out where that's coming from. Sometimes it's easy; sometimes it's a whole journey that involves a lot of self-reflection and a bunch of reading about history.
So, am I being paranoid? Is this change inevitable and I just need to get over it? Is it just... a skill issue?
Join me as I try to unpack what worries me about all this, what I think LLMs can and should be used for, what we might expect given things that have happened in the past, and what this has to do with some guy called Ned Ludd.
Live service games present unique challenges: real-time multiplayer interactions, constant content updates, complex game state management, and the need for high uptime when players depend on your servers. This talk explores how Python's ecosystem can be leveraged to build robust, scalable game backends that handle these demands.
Through real examples from developing "Demon's Hand," a live multiplayer card game, we'll examine practical solutions for common game development challenges: implementing async-first architectures for handling concurrent player actions, designing flexible data models that evolve with game content, managing real-time state synchronization, and building systems that gracefully handle the unpredictable nature of player behaviour.
You'll learn how modern Python tools like HTTPX, asyncio, pyinstrument, and AWS services combine to create responsive game experiences, and discover why Python's rapid development cycle makes it ideal for the iterative nature of game development.
Although Python is an interpreted language, the main implementation - CPython - first compiles your code to an intermediate bytecode representation, which is then interpreted. To understand how this works, we can look at how we could write our own language that compiles to CPython bytecode.
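For a quick taste of what's under discussion, the standard library's `dis` module will show you the bytecode CPython compiles a function to:

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode CPython compiled `add` to.
# The exact opcodes vary between CPython versions
# (e.g. BINARY_ADD in 3.10 became BINARY_OP in 3.11).
dis.dis(add)
```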
A practical review of various DNS management practices, ranging from manual configuration and writing your own zone files, to using provider control panels, tools like OctoDNS and DNSControl, and infrastructure-as-code approaches such as Terraform. It also covers custom scripting techniques that interact directly with DNS provider APIs. Each method is evaluated in terms of maintainability, suitability for different environments, and how it integrates with modern infrastructure management workflows. The session concludes with a look at how I use a simple Python module to create a transparent and adaptable alternative workflow.
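The speaker's actual module isn't shown here, but as a rough, hypothetical illustration of the idea, a zone can be declared as plain Python data and rendered to zone-file records (all names and addresses below are made up):

```python
# Hypothetical sketch: a zone as plain Python data, rendered to
# zone-file records. Not the speaker's actual module.
RECORDS = [
    ("www", "A",    "203.0.113.10"),
    ("www", "AAAA", "2001:db8::10"),
    ("@",   "MX",   "10 mail.example.org."),
]

def render_zone(origin: str, ttl: int, records) -> str:
    """Render (name, type, value) tuples as zone-file text."""
    lines = [f"$ORIGIN {origin}", f"$TTL {ttl}"]
    for name, rtype, value in records:
        lines.append(f"{name}\tIN\t{rtype}\t{value}")
    return "\n".join(lines)

print(render_zone("example.org.", 3600, RECORDS))
```

Because the zone is just data, the same structure can be diffed, tested, or pushed to a provider API instead of written to a file.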
Making code for other people to use is hard work. We have to think about bugs, performance, integration, maintainability, and much more. Making user-friendly code is just one of the many priorities competing for time. In the case of the SOFA Stats Python library, user experience is not a nice-to-have. To be successful, SOFA Stats needs to reach beyond the Python community and be usable by people with very modest technical skills. uv makes this a lot easier, but we should not underestimate the barrier for non-developers. The library needs to be appealing, welcoming, and even fun. Fun is not a word commonly associated with statistical programming, so this clearly requires effort and a strategy.

Which is where UX makes a valuable contribution. UX has a systematic hyper-focus on the people the code is intended for, and how to provide them with the best possible experience. For this project, three personas were identified: students, educators, and code contributors. Clarity about target users is crucial to avoid the natural tendency to develop software for people like ourselves. Instead of working from our own assumptions and hunches, we interview a representative range of people. Care is taken to avoid only hearing what we want to hear. The process is also iterative: we listen, respond to what we hear, and try again.

This talk will explain how UX techniques were used, and what changed in the SOFA Stats library as a result. Maybe you will be inspired to use UX to improve your own project.
Testing is one of those things that we don't often talk about, probably because people don't think there is much to it. Just write a few tests to show that your code works and off you go, right?
There is so much more value in tests that is waiting to be unlocked! This talk aims to re-enlighten developers about how to write great tests, from how to maximise communication through your tests to how to harness AI to make your tests great again.
Under duress from some of my mates I recently bought a bush truck. It has a rear mounted radiator and a couple of manually switched radiator fans that need to be operated based on the temperature gauge reading. It turns out that I am only barely capable of monitoring the gauge while keeping the vehicle pointed in the right direction so I decided to automate it. Despite there being a trivially easy and robust off-the-shelf way to do this I decided to use a microcontroller running MicroPython instead, because ... why not? This talk will describe the design, construction, coding, and performance of the system using several different control strategies.
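One of the simplest control strategies for a job like this is hysteresis (bang-bang) control: switch the fan on above a high threshold and off below a lower one, so it doesn't rapidly cycle near a single set point. A plain-Python sketch (the thresholds are illustrative, not the ones from this build):

```python
# Hysteresis (bang-bang) fan controller: on above ON_TEMP,
# off below OFF_TEMP, unchanged in between.
ON_TEMP = 95.0   # degrees C (illustrative values)
OFF_TEMP = 88.0

def fan_state(temp_c: float, currently_on: bool) -> bool:
    """Return the new fan state for one temperature reading."""
    if temp_c >= ON_TEMP:
        return True
    if temp_c <= OFF_TEMP:
        return False
    # Inside the hysteresis band: keep the current state.
    return currently_on

# Walk a sequence of readings through the controller.
state = False
history = []
for reading in [85, 92, 96, 93, 90, 87, 91]:
    state = fan_state(reading, state)
    history.append(state)
print(history)  # [False, False, True, True, True, False, False]
```

On a microcontroller the same function would simply be called from a timer callback with the sensor reading.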
The climate crisis poses a severe threat to the natural systems that support modern civilization, disrupting essential cycles that provide freshwater, fertile soils, and stable weather patterns. These disruptions are projected to lead to widespread biodiversity loss and to upset local and global economies. To ensure that the scientific basis of these projections is transparent and credible, researchers globally are increasingly making climate data and models openly available. This openness supports informed decision-making and helps safeguard sustainable development from being compromised by short-term political or economic agendas.
Despite this progress in open science, the broader application of open source software and open data in climate and sustainability-related technologies remains limited. National governments, international organizations, academia, industry, and civil society have all played roles in both contributing to the crisis and proposing solutions. However, fragmented, proprietary approaches persist. Open source offers a powerful alternative—lowering costs, enhancing verifiability, and enabling collaboration across disciplines and sectors.
In this talk, I'll introduce OpenSustain.tech, the most comprehensive dataset of over 2,500 open source projects directly addressing the climate crisis. I'll detail the transparent methodology used to curate this collection, including human expert review across multiple fields, and talk a bit about the network of transitive dependencies among these projects, extending previous work in mapping the climate-focused open source ecosystem.
I'll talk about which projects are written in Python, and discuss which projects seem to be most relevant to the climate crisis. Finally, I'll discuss the strategic importance of open source and Python in advancing climate solutions.
Most organizational knowledge is still locked inside complex documents, making it difficult to extract and use the information effectively. Traditional tools often fail when working with real-world PDFs. Tables lose their structure, figures are separated from captions, and multi-column layouts are flattened into unreadable text. These issues create a significant barrier to using AI on real document data.
The open-source project Docling presents a new approach to document ingestion that mirrors human comprehension using open-source deep learning models in a neat Python package. The system extracts structured information through consistent APIs, preserving original document hierarchy while ensuring machine readability.
With support for over ten of the most common file formats and a consistent API, Docling enables production-ready document processing pipelines and provides seamless integration with established frameworks including LangChain and LlamaIndex, as well as multilingual support. Its MIT license and local execution model make it suitable for sensitive enterprise applications.
Ever had your app crash at the worst moment, during checkout or a key internal transaction? When that happens, users leave fast. Behind each crash is a bigger story: alerts firing, teams scrambling, and no clear root cause. Modern systems are fast, complex, and full of hidden failure points. One small change can ripple through the entire system in seconds. Observability is the cornerstone of reliable systems. It allows teams to identify and debug issues before they impact a broader group of users. Yet building an ideal observability stack is far from easy. It requires time and effort - instrumenting every app, service, and component to emit telemetry data.
This talk frames observability not as an ops task but as an engineering discipline. Built on open standards like OpenTelemetry, it avoids vendor lock-in and puts control back in the developer’s hands. You’ll learn how to instrument Python apps, build cost-effective telemetry pipelines, and export data for analysis - without falling into any compliance pitfalls. It's not just about logs, metrics, or traces - the goal is to extract clear business value from every signal and every dollar spent. By aligning observability with outcomes, we create an adaptive, efficient, and cost-aware setup. Whether you're just starting out or operating at scale, you'll see how adopting open standards can turn observability into a strategic asset instead of a liability.
This talk will explore the various ways of spawning processes in Python (subprocess, multiprocessing, concurrent.futures, fork, vfork, posix_spawn, etc.) and how their implementations inside Python have led to regular deadlocks and hard crashes in production environments for many years.
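For context, the most portable of these interfaces is `subprocess`; a minimal example of spawning a child process safely (list argv, no shell):

```python
import subprocess
import sys

# Spawn a child Python interpreter and capture its output.
# Passing a list argv (rather than shell=True) avoids
# shell-injection issues and quoting surprises.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # hello from child
```

The interesting (and dangerous) parts the talk digs into happen below this API, where fork interacts with threads and locks.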
In meteorology and the geosciences, GRIB files are a common occurrence. GRIB files are binary blobs of data that contain "lines" of two-dimensional information about the weather and forecasts.
Every meteorological service deals with these files in a different way. At the New Zealand MetService, we must manipulate them (using Python, of course) to match the local systems. In this talk I'll cover which libraries are best for this process, and how they can be used to get the right data in the right files.
Python is a Good Language. It was a Good Language when I started using it. It probably was for you, too.
But Python is a very different language these days. The Python of 3.14 is noticeably not the same as the Python of 3.9, which is noticeably not the same as the Python of 3.6, and so on. And let's not mention the difference between Python 2.6 and 3.0 (oops).
Is the language you thought was a Good Language when you first used it still a Good Language? If it is, why are you using newer versions?
While new language features have (traditionally) had a high bar for integration into the language, resulting in an opinionated set of features that is generally Pretty Good, not everything is. Python has made design mistakes throughout its existence, but somewhat uniquely among languages, it's made concerted efforts to improve upon those mistakes and socialise the improved versions.
We're going to look at a history of language features in Python, at what we have now, and at what unfortunateness we've left along the way.
What happens when you connect 1920s technology to that of the 2020s? This talk! It's equal parts nostalgia project, technical curiosity, and bleeding-edge research all in one. And along the way we'll all experience what we've gained - and lost - in our rush toward automated everything.
As large language models (LLMs) reshape how students engage with programming, educators face increasing challenges in maintaining academic integrity (An et al., 2025; McDonald et al., 2025). Drawing on the presenter's doctoral research, this talk explores how visual programming tasks can meaningfully resist unauthorised AI assistance, offering a robust alternative to conventional text-based exercises that are highly susceptible to automation.
Central to this approach is Thonny-py5mode -- a creative coding extension for the beginner-friendly Thonny IDE, developed as part of the broader PhD research project. It enables graphical output via py5, a Processing-inspired Python library for programming interactive graphics, animations, and applications (Schmitz, 2021).
The talk builds on findings by McDanel and Novak (2025), who observed that LLMs often struggle with assignments involving graphical output, especially when correctness hinges on appearance rather than testable outcomes. We present a series of Thonny-py5mode tasks used in undergraduate assessment and evaluate the performance of leading LLMs (GPT-4o, Claude Sonnet 4, Gemini 2.5) in replicating them. This session offers a scalable, discipline-agnostic strategy -- adaptable to other Python graphics libraries -- for promoting conceptual understanding, supporting creativity, and designing resilient assessments in the GenAI era.
An entertaining session where volunteers give lightning-style talks using slides they’ve never seen before. Expect plenty of laughs, followed by key daily announcements.
A short session covering key information about the venue, schedule, and practical details to help you make the most of the conference.
Neurodivergent brains can excel at tech, but excelling in formal employment can be more difficult! Most jobs are designed for the mythical neurotypical brain, and many are a particularly poor fit even for well-qualified neurodivergent brains. Learn about strategies that might work for different kinds of neurodivergent brains to adapt these neurotypical-centric jobs to neurodivergent brains. Knowing what adaptations can actually help and how to get those adaptations is important! This talk will give some examples and strategies to shape a job to fit a neurodivergent brain better. Joelle will also encourage those with lead, partnership, or management responsibility to consider alternatives to the ways we've always done things, to allow more people to do amazing work in tech.
This talk will draw from 25 years of professional experience in both tech and neurodivergent communities. It will go beyond the simple answers given to neurodivergent folks in the workplace ("Wear noise-cancelling headphones!") and instead talk about what a job that actually fits some neurodivergent brains might look like. How do we communicate/collaborate with coworkers? Get feedback? Use technology and scripting to assist us? I will mix tech solutions and non-tech solutions to the problems with neurotypical-centric employment. I hope to present a vision of neurodivergent employment futures.
Crow Advisory chose Python for their spatial forecast analysis of the housing development response to Wellington's 2024 District Plan, which greatly increased allowable building capacity across the city. We'll talk through the tools we used, the design approach, and what we learned about doing spatial data science in Python.
Python has long been praised for its simplicity and ease of use, but its performance has often been a subject of debate.
This talk will dive into the inner workings of the Numba JIT compiler, exploring how it transforms Python code into optimised machine code at runtime, and the impact this has on the performance of Python applications. Attendees will also learn about recent progress in accelerated computing in Python with Numba-CUDA, and the internal mechanisms of Numbast, which helps bridge the gap between CUDA C++ and Python.
By attending this talk, participants will gain a comprehensive understanding of the Numba JIT compiler and its potential to revolutionise the performance of Python applications.
There are many ways to contribute to Python; here is mine.
It can be daunting to engage with a mature, large, and complex code base, and the associated processes, community, and culture. This talk will discuss my experience as a new CPython contributor, my approach, how it is going, things I've learned, and what I hope to do next.
The most recent developments in the world of Python packaging have not only made it easier to package projects, but even to create them. Turbopelican is one such tool: it combines Pelican, one of Python's most popular static-site generators, with a modern uv setup to provide one of the easiest ways to create a website without spending a cent. This talk will discuss how recent changes in the packaging world make this, and so much more, not only possible but easy.
Polars (pola.rs) is a new addition to the family of "DataFrame" data manipulation and analysis libraries. In the Python world it's rapidly becoming a highly performant "competitor" to the Pandas library for data science and data processing.
This talk is an introduction to the library and its usage, aimed at beginners as well as people who have done some work with Pandas, for comparison.
Join me for a talk about using Python to simulate materials at the nanometre scale. Let's discover their mechanical, electrical, optical, and thermal properties - only through simulation.
As well as being the world's favourite programming language, Python is also the most widely used language on New Zealand's national supercomputing infrastructure, supporting a wide range of research applications.
Learn about the different ways Python is being used at scale on the REANNZ High Performance Computing cluster.
We'll see examples of New Zealand researchers applying Python to pressing challenges, and show how newcomers can get started: setting up an environment, submitting jobs, and scaling workflows.
Model Context Protocol (MCP) is a powerful new way to extend LLMs with real-time access to tools, APIs, and infrastructure. It enables seamless workflows like querying Grafana dashboards, triggering CI/CD jobs, or fixing issues from Sentry, all without leaving your IDE. In this talk, we’ll explore how MCP works, how to build your own MCP servers, and how to compose them to automate Ops tasks and boost productivity across your stack.
But as we wire LLMs into our systems, security becomes a critical concern. Unrestricted use of MCP can open the door to various vectors of attack. We’ll cover the main areas of concern as companies start adopting MCP tools and discuss how to use them safely in production environments.
A series of rapid-fire five-minute talks where attendees share projects, ideas, and insights. Fast-paced, fun, and full of surprises.
We’ll come together one last time to reflect on highlights, thank our contributors, and close the conference on an inspiring note.