"Shadow IT" is going to exist with or without AI. As you mentioned, Excel and Access have been used for this for a long time. It's encouraged with platforms like SharePoint and AirTable.
As someone who was very much living the shadow IT life, I navigated through most of these tools/platforms. I attempted to maintain documentation and pass it along before leaving roles, but I know most of the applications I've worked on are in the graveyard now.
One thing that IT has to accept is that there is a need for short-term applications, or proof of concepts spun up by the business. The bad part of this is that the business is unlikely to speak up when they grow into something long-term or high volume.
I can rant about this for days, but my point is that this isn't an AI problem, it's a business oriented problem. I don't have an immediate solution for it, other than suggesting IT provide supported avenues for this type of development effort, with a strong push or even requirement for best practices and documentation.
For sure, but LLMs do make it a lot easier for someone who doesn’t really know what they’re doing to get to something large, complicated, and deployable in place.
It’s a business problem for sure, but that business problem becomes a lot larger and higher impact when there are an order of magnitude more more people taking part in it, and they can make things an order of magnitude more complex and harder to verify.
Kinda the same way we’ve gone from a world where photoshop and general VFX software in skilled hands could produce very convincing results but most images/videos still fell into the “probably legit” bucket, to a world where anyone can fake any image at zero effort and you really can’t trust anything you see, actually.
For sure, but LLMs do make it a lot easier for someone who doesn’t really know what they’re doing to get to something large, complicated, and deployable in place.
LLMs also make it a lot easier for people who know exactly what they want to build a bespoke tool to help with their specific workflow.
Things that would've required a full-ass project team with allocated time and cycle planning and product owners etc can be done by just the one person who is actually doing the work.
Source: One of these is in "production" internally at our company and is saving copywriters ~50% of their time weekly by automating all of the tedious shit to a custom front-end one writer built themselves. And a second tool is in the works that does a similar reduction for a section of marketing people.
That is, in essence, what Excel and Access did too.
And when scoped specifically like that, a single user's productivity tool, they are just fine. I have a ton of sql scripts that generate PL/SQL scripts to do various things. When I leave, the only real business impact is that somebody will need to do that job their way.
It's when it starts becoming a shared tool. A thing that is meant to outlive any given employee's tenure. One where it starts becoming trivially easy to accidentally (or not) bypass audits and security measures. Especially when it's done without IT knowing, which means the existence is rarely discovered until it's a "Oh god we can't account for a million dollars" crisis.
Oh yeah there are absolutely meaningful upsides too! I see it as a mixed bag, like a lot of situations where the barrier to entry drops significantly and quickly - eternal September kind of situations - where you do end up with the ratio of good <whatever> to bad <whatever> going down because a lot of people who don't have the skills, or at least don't have them yet, start giving it a go where wouldn't have even been able to try before. But on the other hand, the absolute volume has gone up so much and so many people who do have the skills are now able to participate that even the new smaller percentage of good results is a much bigger absolute number of good results, which is great!
...and then on a third hand you've got to deal with the absolute volume being so high that the if good results are up on a lower percentage, the bad results are now an overwhelming flood that needs a radical shift in thinking to properly deal with. Which even then isn't necessarily a bad thing, it can force better processes and tooling to be put in place, generally push whatever field to mature a bit, but it tends to be short term dangerous at best, and sometimes long term too.
Yep, AI has been a force multiplier for my experienced devs, and the exact opposite for the juniors. I'm not sure if it evens out lol. And we're just a small team. I can't imagine managing 15+ junior devs and doing code reviews on all of that AI-written code.
I've both been on both sides of the ol' "Shadow IT" business. I absolutely agree this is headed that way, but worse. I was one of the people that got stung repeatedly by some hidden little Access/Excel/LAMP-running-on-a-workstation becoming "business critical" and then either needing to support it or create a new real application because the hidden one ran into hard limits.
With the old ones, I usually would think "I wish they had come to us to at least ask about it so I could steer them in the right direction", but with the vibe coded stuff the creators tend to think they have all the answeres because the LLM told them what a smart lil' guy they were and vomited up some house of cards. My CEO has been busy doing this a bunch lately. I had to be the bad guy and do a scathing deep dive on his last project so that he stopped thinking he, by himself, was going to take on the world with his new AI buddies.
I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession
I don't like AI tools in general and do not use them, but the quote is accurate that even those that do use them tend to change their tune in that situation.
At least with Claude you can enforce Skills to the "company marketplace", with those you can at least give the vibe coder some guidelines on what good code looks like.
I imagine the capabilities of agents in 5 years will be able to detangle most of these vibe coded rats nests. So in a sense, yes another y2k — not a big deal?
Relevant story: I recently used Claude to convert an ancient spreadsheet provided by a regulatory agency into commented and tested python.
I always wonder how much of that "not a big deal" was a success in update happening. Obviously the planes falling from the sky thing was overblown, along with the Y2K compliance stickers on everything that took batteries, but a lot of people spent a lot of time making updates. I guess we might find out in 2038.
I mean, I spent 13 years working for an old institution that had imported data all the way back to 1900.
It would have been an absolute hot mess for another 15 years if they didn't put in the 3 years of effort then. Plus that made the effort even easier once everyone realized UTC and time zone offsets was a much better choice.
It's mostly fairly solved at this point, although not all languages and ecosystem have all of the correct components, and old data and systems need to be supported which makes things harder. But generally:
Use dates if you just need dates, use times if you just need times. These things don't need timezones, do you're sorted here.
If the thing you're working with is a fixed point in time, typically an event that has happened in the past that can't ever change, store it in UTC, and consistently use UTC throughout the app. Then at the last moment, render it in the user's timezone or whichever timezone is relevant.
If the user is configuring an event that will happen in the future, save the configuration and a materialised UTC timestamp. The timestamp is for you internally, and should be re-creates if the configuration ever changes, plus every time the tzdb gets updated. The configuration is the source of truth. The configuration is all of the options the use could input: date, time, timezone, duration (start+duration is more likely to stay accurate than start+end). Default to the user's local timezone, allow them to configure it if needed.
If you need to do arithmetic with datetimes, figure out whether the user expects an answer in absolute time or in wall-clock time. Typically, consider what should happen during a DST transition: if I have a span of one day during the transition, should that span last 24hrs exactly, or should it last 23hrs or 25hrs depending on the transition? Different applications will have different answers here, there's no single correct solution.
If it's always 24hrs exactly, you want UTC everywhere again, this is the easy case.
If it depends on the transition, then you want wall-clock times, i.e. "what time would a calendar+clock hanging on the wall show?". This is probably the hardest case to handle, but there's lots of good support for it. Now consider:
Do I need to know what the actual timezone is? If not (typically the case if you're confident you'll never need to deal with times that happen during a transition, and there's only one timezone involved in each set of calculations), you might be able use the local/naive datetime, i.e. the one where no timezones are attached.
Otherwise, store the timezone alongside each date. The ISO date standard conveniently has a format for this. Importantly, you should sure the timezone, not (just) the offset!
Either way, make sure you're always using the correct units and never converting unnecessarily. E.g. if you're using days, never convert 1 day = 24 hours. Use a duration type that can distinguish these two units.
If different users can have different timezones, try and get the timezone from an external source of truth, e.g. the operating system or the browser. If that isn't possible or the user should be able to configure their timezone, ask the user for their location not their offset (e.g. show "Europe/Berlin" as an option, not "+01"). Although for convenience, it's probably good to show what the offset currently is in each location, and allow them to search by an offset abbreviation (e.g. "CET" should show a list of timezones that use +01 in the winter).
For JavaScript, always use Temporal (via polyfill if necessary) to manage dates and times. Other languages will have their own library. The minimum necessary setup is that all of the above cases are represented by their own types - you should always have separate date, time, and datetime classes, the datetime class should be broken up into "UTC", "local", "named timezone" classes, etc.
With all of the above, datetimes are mostly manageable, but the challenge will always be other applications or data sources that don't play by the rules properly and how to handle those.
My challenge is that I language hop quite a bit. That means every time I go back to a language I haven't used in over a year I need to research the correct date and/or time library to use at this point in tone and how to use it. Without that research I might end up with an outdated library that for example doesn't take into account the most recent daylight savings information.
I've never even heard of Temporal, last time I needed it I used Moment. I think.
I mean there were definitely issues. I worked personally on Y2K remediation efforts for a large electricity provider at the time. But, when you look at the global response, many countries just about ignored the issue, and there was very little fallout. This is a little perspective from back in the day:
He said that the lack of money available to deal with the Y2K problem, coupled with the dilapidated state of computer systems in Russia - many of which used pirated software and cloned IBM hardware from the Seventies - meant that updating the technological infrastructure was impossible. Instead the Russian authorities had to adopt a fire-fighting approach, minimising the threat of a disaster.
Staff maintaining crucial IT systems were taught to ignore glitches occurring around 31 December, while many computer systems were simply 'clocked' by making them believe the date was 1970 rather than 1999. They could be overhauled later.
Vivek Wadhwa, chief executive of US-based Relativity Technologies, who advised Terekhov, said: 'The problem in the West was blown out of all proportion. A lot of people made a lot of money.'
I'd be wary of this. Much better to have code with almost no comments and then have an LLM explain bits as needed. Generally I am anti-commenting as it's an attempt to add specs to the code that aren't statically (or dynamically) validated in any way. Comment only as a last resort.
I genuinely can't tell if you're joking. Comments that only re-explain what is happening are kinda useful, but the real place for comments is in providing context and the "why". An agent can't do that because it is, by definition, not otherwise present in the code.
Not joking. The need to explain "why" is, in my experience, rare. Maybe a few things per file should have comments. Generally when I find myself commenting something it's because the code is inherently confusing and should be rewritten.
I comment so I don't have to pull out the 100 page word document that was specced out for the initial creation every time I'm debugging.
No code will ever be able to capture the 100 page spec and vice versa. The comments are there to act as the glue. I thank every single coder that cites the page of the spec that function was written for.
The business "why" for a bit of code is frequently impossible to semantically embed. I'm fully aware of domain driven design, but sometimes code is just doing code things to accomplish a business use case in a stupid way (for reasons outside of any engineer's control). Knowing what the code literally does rarely helps me solve bugs in my job because it doesn't contain references to the regulatory reasons the API exists.
I used to take this approach, until I worked with a colleague who comments way too much and realised that actually "too much" was a way better failure more than "too little" when trying to figure out what was going on with code I hadn't touched in a year. Like, I'm still fairly conservative with my comments, but it's definitely not a "last resort" sort of thing.
I'm still trying to articulate when comments make sense to me, but I wrote about one part of this on my blog a while back, and the gist was that if I need to communicate something to you that isn't obvious from the code alone, I've typically got two options: naming (i.e. variables, functions, types) or comments. But names are really hard to get right. You don't want to have too long a name (it gets unwieldy fast), but if it's too simple or generic a name, the reader may not understand the full meaning or purpose of that variable or function. It's also harder to sum up complex ideas in a good name - sometimes there really is exactly the right name for something, but often there isn't, or you can't think of it at least, so you end up with a name that's not wrong, but it's also not quite clear by itself.
On the other hand, the benefit of a comment is that you can explain all your reasoning with no limit but your patience typing and your reader's patience reading. You can definitely have comments that are too long, but it's harder to achieve that than it is to find a bad name. You can also explain why an algorithm was chosen (usually hard to capture in a name) or summarise what an algorithm does in words rather than code (usually clearer as prose than as a name).
The disadvantage is that you don't necessarily have the comment at every call-site or reference site, although tooling can mitigate this somewhat. But the advantage is that you can be much more precise and communicate much more clearly in a comment than you ever could in a name. And while it's often even cheaper expressing your idea directly in code, sometimes that code isn't obvious or misses some nuance or something like that.
So I'd suggest the better approach is almost the opposite to yours: comment first, then figure out if there's a simpler way of expressing the comment (a concise but clear name, a better way of writing the code, etc).
I think a lot of the ideas about code commenting were formed a few decades ago when you had to write all the code yourself to do even simple things. In the last 10 or 20 years, it seems like a lot of programming is gluing together someone else’s library functions. Since library functions are usually pretty well named, a lot of the code is self documenting as long as the developer carefully chooses function and variable names.
Also modern programming languages generally promote structure, and linting tools will complain if you violate best practices and even whitespace. Tools like sonarqube will even tell you if functions are too long or confusing.
People are also trained to write comments when getting their degrees. It makes more sense then when you just learned what a for loop is last week and might need reminding on how every little bit works as you iterate on an assignment over the course of a week. It might also be pretty helpful to the TAs when grading, although I’ve never done that kind of work before.
I found an old wiki from the early 2000s we built for an IRC channel back in the day. The source for the backend was partially lost and the storage format was one guy's personal invention.
Shoved Claude at it with "here's the DB, that's part of the code I managed to salvage" and it managed to export everything to markdown files.
There's exactly zero chance I would've ever gotten around to doing it myself.
Not wrong, but arguably, that's always been the goal.
The whole point of tech is to, eventually, get to where the business user can solve problems with their toolset. The problem of Excel/Access and now AI has always been that if you want to do that you have to design with rights and management FIRST, not second.
It is going to be a massive nightmare (already seeing it in my friends companies) and it almost certainly will lead to yet another wave of "so uhhh we don't know how this works, someone did it years ago, pls fix" hires, but the real "goal" is still IT managing rights and access. Not also building solutions with the department.
ShroudedScribe | a day ago
"Shadow IT" is going to exist with or without AI. As you mentioned, Excel and Access have been used for this for a long time. It's encouraged with platforms like SharePoint and AirTable.
As someone who was very much living the shadow IT life, I navigated through most of these tools/platforms. I attempted to maintain documentation and pass it along before leaving roles, but I know most of the applications I've worked on are in the graveyard now.
One thing that IT has to accept is that there is a need for short-term applications, or proof of concepts spun up by the business. The bad part of this is that the business is unlikely to speak up when they grow into something long-term or high volume.
I can rant about this for days, but my point is that this isn't an AI problem, it's a business oriented problem. I don't have an immediate solution for it, other than suggesting IT provide supported avenues for this type of development effort, with a strong push or even requirement for best practices and documentation.
Greg | a day ago
For sure, but LLMs do make it a lot easier for someone who doesn’t really know what they’re doing to get to something large, complicated, and deployable in place.
It’s a business problem for sure, but that business problem becomes a lot larger and higher impact when there are an order of magnitude more people taking part in it, and they can make things an order of magnitude more complex and harder to verify.
Kinda the same way we’ve gone from a world where photoshop and general VFX software in skilled hands could produce very convincing results but most images/videos still fell into the “probably legit” bucket, to a world where anyone can fake any image at zero effort and you really can’t trust anything you see, actually.
shrike | 18 hours ago
LLMs also make it a lot easier for people who know exactly what they want to build a bespoke tool to help with their specific workflow.
Things that would've required a full-ass project team with allocated time and cycle planning and product owners etc can be done by just the one person who is actually doing the work.
Source: One of these is in "production" internally at our company and is saving copywriters ~50% of their time weekly by automating all of the tedious shit to a custom front-end one writer built themselves. And a second tool is in the works that does a similar reduction for a section of marketing people.
[OP] vord | 10 hours ago
That is, in essence, what Excel and Access did too.
And when scoped specifically like that, a single user's productivity tool, they are just fine. I have a ton of sql scripts that generate PL/SQL scripts to do various things. When I leave, the only real business impact is that somebody will need to do that job their way.
It's when it starts becoming a shared tool. A thing that is meant to outlive any given employee's tenure. One where it starts becoming trivially easy to accidentally (or not) bypass audits and security measures. Especially when it's done without IT knowing, which means the existence is rarely discovered until it's a "Oh god we can't account for a million dollars" crisis.
shrike | 6 hours ago
Yea, my job currently is pretty much finding all of these, cataloguing them and bringing them up to proper quality :D
All without stomping on people's enthusiasm for creating small utilities for themselves and their team.
[OP] vord | 6 hours ago
'Create away, just tell us and let us audit' is a pretty reasonable threshold tbh.
Greg | 14 hours ago
Oh yeah there are absolutely meaningful upsides too! I see it as a mixed bag, like a lot of situations where the barrier to entry drops significantly and quickly - eternal September kind of situations. You do end up with the ratio of good <whatever> to bad <whatever> going down, because a lot of people who don't have the skills, or at least don't have them yet, start giving it a go where they wouldn't even have been able to try before. But on the other hand, the absolute volume has gone up so much, and so many people who do have the skills are now able to participate, that even the new smaller percentage of good results is a much bigger absolute number of good results, which is great!
...and then on a third hand you've got to deal with the absolute volume being so high that even if good results are up despite the lower percentage, the bad results are now an overwhelming flood that needs a radical shift in thinking to deal with properly. Which even then isn't necessarily a bad thing - it can force better processes and tooling to be put in place, and generally push whatever field it is to mature a bit - but it tends to be dangerous in the short term at best, and sometimes in the long term too.
tl;dr: shit be complicated
kovboydan | 13 hours ago
Noise: In my mind on the third hand is “on the gripping hand.”
Gourd | 10 hours ago
Yep, AI has been a force multiplier for my experienced devs, and the exact opposite for the juniors. I'm not sure if it evens out lol. And we're just a small team. I can't imagine managing 15+ junior devs and doing code reviews on all of that AI-written code.
zod000 | a day ago
I've been on both sides of the ol' "Shadow IT" business. I absolutely agree this is headed that way, but worse. I was one of the people that got stung repeatedly by some hidden little Access/Excel/LAMP-running-on-a-workstation becoming "business critical" and then either needing to support it or create a new real application because the hidden one ran into hard limits.
With the old ones, I usually would think "I wish they had come to us to at least ask about it so I could steer them in the right direction", but with the vibe coded stuff the creators tend to think they have all the answers because the LLM told them what a smart lil' guy they were and vomited up some house of cards. My CEO has been busy doing this a bunch lately. I had to be the bad guy and do a scathing deep dive on his last project so that he stopped thinking he, by himself, was going to take on the world with his new AI buddies.
GOTO10 | a day ago
"I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession."
zod000 | a day ago
I don't like AI tools in general and do not use them, but the quote is accurate that even those that do use them tend to change their tune in that situation.
shrike | 17 hours ago
At least with Claude you can enforce Skills through the "company marketplace", and with those you can give the vibe coder some guidelines on what good code looks like.
unkz | a day ago
I imagine agents in 5 years will be capable of untangling most of these vibe-coded rats' nests. So in a sense, yes, another Y2K — not a big deal?
Relevant story: I recently used Claude to convert an ancient spreadsheet provided by a regulatory agency into commented and tested python.
tanglisha | a day ago
I always wonder how much of that "not a big deal" was a result of the updates actually happening. Obviously the planes-falling-from-the-sky thing was overblown, along with the Y2K compliance stickers on everything that took batteries, but a lot of people spent a lot of time making updates. I guess we might find out in 2038.
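For reference, the 2038 cutoff is just a signed 32-bit `time_t` rolling over, which is easy to compute (a quick Python sketch):

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since the Unix epoch (1970-01-01)
# and overflows at 2**31 - 1 seconds:
rollover = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```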
[OP] vord | a day ago
I mean, I spent 13 years working for an old institution that had imported data all the way back to 1900.
It would have been an absolute hot mess for another 15 years if they hadn't put in the 3 years of effort then. Plus that made the effort even easier once everyone realized UTC and time zone offsets were a much better choice.
tanglisha | a day ago
If only time were a solved problem.
[OP] vord | a day ago
It will only be varying degrees of bad, forever.
Johz | 22 hours ago
It's mostly fairly solved at this point, although not all languages and ecosystems have all of the correct components, and old data and systems need to be supported, which makes things harder. But generally:

Use dates if you just need dates, and times if you just need times. These things don't need timezones, so you're sorted there.

If the thing you're working with is a fixed point in time - typically an event that has happened in the past and can't ever change - store it in UTC, and consistently use UTC throughout the app. Then, at the last moment, render it in the user's timezone or whichever timezone is relevant.

If the user is configuring an event that will happen in the future, save the configuration plus a materialised UTC timestamp. The timestamp is for you internally, and should be re-created if the configuration ever changes, plus every time the tzdb gets updated. The configuration is the source of truth: it's all of the options the user could input - date, time, timezone, duration (start+duration is more likely to stay accurate than start+end). Default to the user's local timezone, and allow them to configure it if needed.

If you need to do arithmetic with datetimes, figure out whether the user expects an answer in absolute time or in wall-clock time. Typically, consider what should happen during a DST transition: if I have a span of one day covering the transition, should that span last exactly 24hrs, or should it last 23hrs or 25hrs depending on the transition? Different applications will have different answers here; there's no single correct solution.

If it's always exactly 24hrs, you want UTC everywhere again - this is the easy case.

If it depends on the transition, then you want wall-clock times, i.e. "what time would a calendar+clock hanging on the wall show?". This is probably the hardest case to handle, but there's lots of good support for it. Now consider: do I need to know what the actual timezone is? If not (typically the case if you're confident you'll never need to deal with times that happen during a transition, and there's only one timezone involved in each set of calculations), you might be able to use local/naive datetimes, i.e. ones with no timezone attached. Otherwise, store the timezone alongside each date - the ISO date standard conveniently has a format for this. Importantly, store the timezone, not (just) the offset!

Either way, make sure you're always using the correct units and never converting unnecessarily. E.g. if you're working in days, never convert 1 day = 24 hours; use a duration type that can distinguish these two units.

If different users can have different timezones, try to get the timezone from an external source of truth, e.g. the operating system or the browser. If that isn't possible, or the user should be able to configure their timezone, ask the user for their location, not their offset (e.g. show "Europe/Berlin" as an option, not "+01"). Although for convenience, it's probably good to show what the offset currently is in each location, and to allow searching by an offset abbreviation (e.g. "CET" should show a list of timezones that use +01 in the winter).

For JavaScript, always use Temporal (via polyfill if necessary) to manage dates and times; other languages will have their own libraries. The minimum necessary setup is that all of the above cases are represented by their own types - you should always have separate date, time, and datetime classes, and the datetime class should be broken up into "UTC", "local", and "named timezone" variants, etc.

With all of the above, datetimes are mostly manageable, but the challenge will always be other applications or data sources that don't play by the rules, and how to handle those.
tanglisha | 2 hours ago
My challenge is that I language hop quite a bit. That means every time I go back to a language I haven't used in over a year, I need to research the correct date and/or time library to use at this point in time, and how to use it. Without that research I might end up with an outdated library that, for example, doesn't take into account the most recent daylight savings information.
I've never even heard of Temporal, last time I needed it I used Moment. I think.
unkz | a day ago
I mean there were definitely issues. I worked personally on Y2K remediation efforts for a large electricity provider at the time. But, when you look at the global response, many countries just about ignored the issue, and there was very little fallout. This is a little perspective from back in the day:
https://www.theguardian.com/business/2000/jan/09/y2k.observerbusiness
teaearlgraycold | a day ago
I'd be wary of this. Much better to have code with almost no comments and then have an LLM explain bits as needed. Generally I am anti-commenting as it's an attempt to add specs to the code that aren't statically (or dynamically) validated in any way. Comment only as a last resort.
F13 | a day ago
I genuinely can't tell if you're joking. Comments that only re-explain what is happening are kinda useful, but the real place for comments is in providing context and the "why". An agent can't do that because it is, by definition, not otherwise present in the code.
teaearlgraycold | a day ago
Not joking. The need to explain "why" is, in my experience, rare. Maybe a few things per file should have comments. Generally when I find myself commenting something it's because the code is inherently confusing and should be rewritten.
[OP] vord | a day ago
I comment so I don't have to pull out the 100 page word document that was specced out for the initial creation every time I'm debugging.
No code will ever be able to capture the 100 page spec and vice versa. The comments are there to act as the glue. I thank every single coder that cites the page of the spec that function was written for.
teaearlgraycold | 23 hours ago
We live in different worlds. Never have I had such a document, working for local government, startups, or FAANG.
[OP] vord | 23 hours ago
Startups/FAANG are all built on the more modern 'move fast break things' agilish mentalities. Local governments don't have the resources.
Big old companies and the military have giant sharepoint directories full of these things.
F13 | a day ago
Generally agreed, yeah, it is rare to need to explain why. But as another commenter said, too many comments is still better than too few.
Minori | 2 hours ago
The business "why" for a bit of code is frequently impossible to semantically embed. I'm fully aware of domain driven design, but sometimes code is just doing code things to accomplish a business use case in a stupid way (for reasons outside of any engineer's control). Knowing what the code literally does rarely helps me solve bugs in my job because it doesn't contain references to the regulatory reasons the API exists.
Johz | a day ago
I used to take this approach, until I worked with a colleague who comments way too much and realised that "too much" was actually a way better failure mode than "too little" when trying to figure out what was going on with code I hadn't touched in a year. Like, I'm still fairly conservative with my comments, but it's definitely not a "last resort" sort of thing.
I'm still trying to articulate when comments make sense to me, but I wrote about one part of this on my blog a while back, and the gist was that if I need to communicate something to you that isn't obvious from the code alone, I've typically got two options: naming (i.e. variables, functions, types) or comments. But names are really hard to get right. You don't want to have too long a name (it gets unwieldy fast), but if it's too simple or generic a name, the reader may not understand the full meaning or purpose of that variable or function. It's also harder to sum up complex ideas in a good name - sometimes there really is exactly the right name for something, but often there isn't, or you can't think of it at least, so you end up with a name that's not wrong, but it's also not quite clear by itself.
On the other hand, the benefit of a comment is that you can explain all your reasoning with no limit but your patience typing and your reader's patience reading. You can definitely have comments that are too long, but it's harder to achieve that than it is to find a bad name. You can also explain why an algorithm was chosen (usually hard to capture in a name) or summarise what an algorithm does in words rather than code (usually clearer as prose than as a name).
The disadvantage is that you don't necessarily have the comment at every call-site or reference site, although tooling can mitigate this somewhat. But the advantage is that you can be much more precise and communicate much more clearly in a comment than you ever could in a name. And while it's often even cheaper expressing your idea directly in code, sometimes that code isn't obvious or misses some nuance or something like that.
So I'd suggest the better approach is almost the opposite to yours: comment first, then figure out if there's a simpler way of expressing the comment (a concise but clear name, a better way of writing the code, etc).
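To make that concrete, a hypothetical sketch (the rounding rule and the spec reference are invented for illustration) where the "why" fits in a comment but not in any reasonable name:

```python
def round_invoice_total(amount_cents: int) -> int:
    """Round an invoice total for cash payment."""
    # Why: cash invoices in this (hypothetical) market are legally rounded
    # to the nearest 5 cents - a regulatory fact no function name can carry.
    # See pricing spec, section 4.2 (invented reference, for illustration).
    return 5 * round(amount_cents / 5)
```

The name says *what*; only the comment can say *why* the rounding exists at all.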
hobbes64 | a day ago
I think a lot of the ideas about code commenting were formed a few decades ago when you had to write all the code yourself to do even simple things. In the last 10 or 20 years, it seems like a lot of programming is gluing together someone else’s library functions. Since library functions are usually pretty well named, a lot of the code is self documenting as long as the developer carefully chooses function and variable names.
Also modern programming languages generally promote structure, and linting tools will complain if you violate best practices and even whitespace. Tools like sonarqube will even tell you if functions are too long or confusing.
teaearlgraycold | a day ago
People are also trained to write comments when getting their degrees. It makes more sense then when you just learned what a for loop is last week and might need reminding on how every little bit works as you iterate on an assignment over the course of a week. It might also be pretty helpful to the TAs when grading, although I’ve never done that kind of work before.
shrike | 18 hours ago
I found an old wiki from the early 2000s we built for an IRC channel back in the day. The source for the backend was partially lost and the storage format was one guy's personal invention.
Shoved Claude at it with "here's the DB, that's part of the code I managed to salvage" and it managed to export everything to markdown files.
There's exactly zero chance I would've ever gotten around to doing it myself.
kovboydan | a day ago
If Access is still a product I hope Google buys it, resurrects Wave, and buries Access.
Eji1700 | a day ago
Not wrong, but arguably, that's always been the goal.
The whole point of tech is to, eventually, get to where the business user can solve problems with their toolset. The problem of Excel/Access and now AI has always been that if you want to do that you have to design with rights and management FIRST, not second.
It is going to be a massive nightmare (already seeing it in my friends' companies), and it almost certainly will lead to yet another wave of "so uhhh we don't know how this works, someone did it years ago, pls fix" hires, but the real "goal" is still IT managing rights and access, not also building solutions with the department.