How We Use AI Across Our Entire Development Stack

Every property in the Connecticode portfolio uses AI in some capacity — some as a core product feature, some as a development tool, some as a content production aid. After 18 months of integrating various AI systems into our workflow, here's an honest assessment of where AI delivers clear value and where we still prefer human judgment.

AI as a Development Partner

We use Claude Code for the majority of our development work. At Connecticode's scale — a small team managing six distinct codebases — having an AI assistant that can understand a full codebase, write correct code from context, and catch security issues in real time changes the math of what's possible.

The most valuable use case isn't generating boilerplate. It's catching things we'd miss: an open redirect vulnerability in a Stripe checkout URL parameter, an XSS exposure in a Markdown renderer, a missing authorization check on a Server Action. Security review is tedious and easy to rush. Having AI perform an initial pass and flag suspicious patterns before human review has eliminated several real bugs that would have reached production.

We also use AI for architecture discussions. Before writing significant new functionality, talking through the approach with an AI that has full context of the codebase often surfaces tradeoffs we hadn't considered. It's not a replacement for engineering judgment — it's a fast, tireless sounding board.

AI-Generated Content at Scale

Three of our five content properties use AI for bulk content generation: RVMapper (183 blog articles), Hooked Fisherman (418 guides + 243 reviews), and Oil Outpost (104 articles). The process is consistent across all three:

  1. Keyword research first. We identify the topics, not the AI. Long-tail keywords with genuine search intent and low competition.
  2. Structured prompt with context. Each generation prompt includes the site's brand voice, target audience, geographic focus, and formatting requirements.
  3. Human editorial review. Every article is reviewed for factual accuracy and practical value before publishing. AI hallucinations are a real problem with technical or location-specific content — an article about Connecticut fishing access points can't have wrong location details.
  4. Final formatting pass. Ensuring consistent structure, internal linking, and call-to-action placement.

The generation step is fast. The review step is slower — but it's faster than writing from scratch, and it maintains editorial quality standards.
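In rough TypeScript, step 2 of the pipeline reduces to folding site context and a human-chosen keyword into a single structured prompt. The interface fields and example values below are illustrative, not our actual schema:

```typescript
// Illustrative site profile; fields are assumptions, not our real schema.
interface SiteProfile {
  brandVoice: string;
  audience: string;
  geoFocus: string;
  formatting: string;
}

// Step 2: combine the site's context with a keyword chosen in step 1.
function buildArticlePrompt(site: SiteProfile, keyword: string): string {
  return [
    `Write a blog article targeting the keyword: "${keyword}".`,
    `Brand voice: ${site.brandVoice}.`,
    `Audience: ${site.audience}. Geographic focus: ${site.geoFocus}.`,
    `Formatting: ${site.formatting}.`,
  ].join("\n");
}

const hooked: SiteProfile = {
  brandVoice: "practical, first-person",
  audience: "Connecticut shore anglers",
  geoFocus: "Connecticut",
  formatting: "H2 sections, short paragraphs, internal links",
};

const prompt = buildArticlePrompt(hooked, "best fall striper spots in CT");
// Steps 3 and 4 (human review, formatting pass) happen after generation,
// outside this sketch.
```

The point of the structure is that the AI never chooses the topic or the voice — both arrive fully specified in the prompt.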

AI as a Product Feature

RVMapper's core product is AI trip planning. Users describe their travel goals and the AI generates a personalized multi-stop itinerary with campground recommendations, driving routes, and travel tips. The AI here is Claude Sonnet, called via the Anthropic API with a rich system prompt that includes campground categories, regional knowledge, and itinerary formatting requirements.
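The shape of that API call is simple; the value is in the system prompt. A minimal sketch of the request body we build (the system prompt content and model string here are stand-ins, and error handling, streaming, and the actual HTTP call are omitted):

```typescript
// Stand-in for the real trip-planning system prompt.
const systemPrompt = [
  "You are an RV trip planner.",
  "Campground categories: state park, private resort, boondocking.",
  "Return a day-by-day itinerary with drive times and campground picks.",
].join("\n");

// Builds the JSON body for the Anthropic Messages API.
function buildTripRequest(userGoals: string) {
  return {
    model: "claude-sonnet-latest", // illustrative model identifier
    max_tokens: 2048,
    system: systemPrompt,
    messages: [{ role: "user", content: userGoals }],
  };
}

// In production this body is POSTed to the Messages API endpoint with an
// API key header; the response is rendered as the itinerary.
const body = buildTripRequest(
  "Two weeks from Denver to Glacier, pet-friendly stops, under 300 miles/day"
);
```

Everything product-specific — campground taxonomy, regional knowledge, output format — lives in the system prompt, so the application code stays thin.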

Point Strategist uses AI differently — it evaluates a user's loyalty point balances and recommends the highest-CPP redemptions based on current valuations and the user's stated travel goals. The AI component analyzes the optimization problem (which combination of programs and transfer paths maximizes the user's stated objective) rather than generating creative content.
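The deterministic core of that optimization is just a cents-per-point (CPP) comparison; the AI layers reasoning about transfer paths and the user's stated goals on top. A toy version of the core, with made-up programs and valuations:

```typescript
// Toy CPP comparison; programs and numbers are invented for illustration.
interface Redemption {
  program: string;
  pointsRequired: number;
  cashValueCents: number; // cash price of the same booking
}

function centsPerPoint(r: Redemption): number {
  return r.cashValueCents / r.pointsRequired;
}

// Highest-CPP redemption the user can actually afford with current balances.
function bestRedemption(
  options: Redemption[],
  balances: Map<string, number>
): Redemption | undefined {
  return options
    .filter((r) => (balances.get(r.program) ?? 0) >= r.pointsRequired)
    .sort((a, b) => centsPerPoint(b) - centsPerPoint(a))[0];
}

const options: Redemption[] = [
  { program: "Chase UR", pointsRequired: 60000, cashValueCents: 90000 }, // 1.5 cpp
  { program: "Amex MR", pointsRequired: 50000, cashValueCents: 55000 },  // 1.1 cpp
];
const balances = new Map([
  ["Chase UR", 70000],
  ["Amex MR", 80000],
]);
const best = bestRedemption(options, balances); // the Chase UR option
```

What makes the real problem AI-shaped is everything this sketch leaves out: transfer partners change the effective balance per program, and "maximize value" depends on what the user actually wants to book.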

Both use cases benefit from the same thing: AI that can reason about complex, context-dependent problems in natural language. Static logic can't replicate this flexibility.

AI for Social Media

The Connecticode social automation system uses AI to generate captions, image prompts, and post variations for all five sites' social media channels. The generation happens in a daily cron job: new content is drafted based on upcoming blog articles or seasonal themes, quality-checked against brand guidelines, and queued for Buffer distribution.
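The quality-check stage is the only part of that job with any logic worth showing. A rough sketch of the filter between generation and the Buffer queue (the function names, field names, and 280-character limit are placeholders, not the real implementation):

```typescript
// Placeholder draft shape for a generated social post.
interface SocialDraft {
  site: string;
  caption: string;
  imagePrompt: string;
}

// Cheap automated brand check: length cap plus a banned-phrase list.
function passesBrandCheck(draft: SocialDraft, bannedPhrases: string[]): boolean {
  const text = draft.caption.toLowerCase();
  return (
    draft.caption.length > 0 &&
    draft.caption.length <= 280 &&
    !bannedPhrases.some((p) => text.includes(p.toLowerCase()))
  );
}

// Drafts that clear the check are queued for Buffer; the rest are dropped.
function filterForQueue(drafts: SocialDraft[], banned: string[]): SocialDraft[] {
  return drafts.filter((d) => passesBrandCheck(d, banned));
}
```

Because the stakes are low, a rejected draft is simply dropped — no human escalation, no retry logic.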

The quality bar for social content is lower than for editorial content — a slightly off caption is annoying, not damaging. This makes it a good fit for automated generation with minimal human review.

Where We Don't Use AI

Security decisions. AI can flag security issues, but the decision about whether a risk is acceptable, how to remediate it, and whether to add compensating controls requires human judgment about business context and risk tolerance.

Product direction. Which features to build, which markets to enter, how to price a SaaS subscription — these are strategic decisions that require understanding of market dynamics, competitive positioning, and customer psychology. AI can provide research and analysis, but the decision is ours.

Customer support. We don't use AI for customer-facing support interactions on any of our platforms. At its current quality level, AI support is often frustrating for users with edge-case problems. We'd rather respond more slowly with a better answer.

Financial operations. Payment flows, subscription management, and revenue reporting use standard Stripe infrastructure and human review. No AI in the money path.

The Meta-Lesson

AI is best deployed where the cost of an error is recoverable and the volume of similar tasks is high. Content generation, code review suggestions, social captions — all of these fit that profile. The upside (dramatically faster production) outweighs the downside (occasional error that needs correction).

For decisions that are hard to reverse or high-stakes, AI augments rather than replaces human judgment. The skill is knowing which category any given decision falls into.