Non-Artificial Intelligence

Introduction

«We tried ChatGPT, we really like it. Just one thing — could you teach us how to ask it questions properly when we need to feed it a text in parts? We have these long cases, 3–4 pages, and it just won’t take them.»

That’s when I realize something is off. Don’t get me wrong: it’s definitely easier and more pleasant to work with people who already like the tool you’re teaching them to use. No need to overcome resistance or sell something you have no personal stake in. But here we are in February 2025, and ChatGPT-3.5 is a thing of the past. A 3–4 page case is no longer considered a «long» document — it’s a fairly small context that can easily be pasted into a prompt or uploaded as a file with most current models. So there’s no need to «feed» anything in parts. And no need to learn how to do that either. What you do need is understanding.

I ask them to open the service they’re using. Turns out it’s one of those «ChatGPT made easy» platforms — Russian services that connect to the models via the API. And there’s the problem. A platform that limits context window size, trims tokens from queries, and rolls out new features and model updates much more slowly than the official services. That’s because those very models and features are often still being tested within the official systems. But who cares about technical nuances that directly affect performance when someone has promised a hassle-free experience with no sign-ups, no permissions, and no payments?

You know what the core issue is when it comes to adopting AI in the workplace? It’s that when you ask, «Which AI tool do you want to learn (or teach your staff to use)?» — the most common answer is: «Whichever one is free and doesn’t require a VPN.» And unfortunately, I’m not joking.

Because I — naively — still hope to hear something like: «One that works reliably with Excel spreadsheets,» or «One where we can upload entire books,» or «One that can respond in different roles,» or «One that makes the text sound human.» But no. The key requirement is that it be free. In other words, the goal isn’t defined by quality or results, but by the absence of a condition. Great. In that case, you might as well not even bother entering the process. Because amid all the techno-optimism and AI sales hype, people tend to forget one simple fact: proper, full-scale digital transformation — even in a small segment of work — is a major investment. And it’s not just about paying for the models themselves. It includes overhauling processes, training people, configuring models, and so much more. Everyone wants to ride the wave without investing or taking risks? That’s just not how it works. Digitalization is an investment. And investment always carries risk.

As harsh as it may sound, at the current level of technological development (and I’m not talking about today, but the next three, five, even ten years), humans remain either irreplaceable — or much cheaper — for completing a given task than artificial intelligence would be, assuming the business owner has even the faintest idea of the quality they need. If the only objective is something like «Write a fun post about our product with some emojis, something cool for the youth,» then yes, AI can replace us all in no time. But living in that kind of world, in that kind of information space, is not something I look forward to. Hopefully, neither do you.

But what I’ve just described is only one side of the current reality. There’s another.

I’d been speaking with the team behind one of those very «AI in a single window, no foreign cards needed» services. A startup, early in its journey — ambitious, flexible, open to discussion, at least in theory. At the time, I was looking for ways to optimize my work with neural networks, and I thought we could really help each other. Through me, they’d learn what businesses need and which features are useful for real-world tasks. I’d get a solid Russian platform with access to all the major models for my workflow. Not free, obviously, but with no hoops to jump through — fully functional, with an adequate context window.

At the time, the project was on the brink of being frozen due to uncertain prospects. So initially, it seemed like we could help each other out, even without formal financial commitments. For me, it would make my work easier; for them, at the very least, it would be a launchpad within my current company. Within six months, two hundred people might be using the tool; within a year, a thousand. The key was to pass a pilot trial with an already-tested group. I often act as a kind of «business matchmaker,» so I saw no issues here — our interests were clearly aligned.

Then came the first functionality call. Keep in mind, this startup is already in the market, offering services with paid plans. Technically alive, though barely. And in this very first meeting, I explain that in order to approve a pilot, I need to know what the cost would be per month for a team of ten people. Of course, I have a basic description of how AI is currently used in this department: ten employees, five-day work week, active ChatGPT usage, about four hours per day, feeding documents of 10–15 pages, typically ten query steps per document. Just dry business numbers.

What do you think their answer was? Let me quote it directly:

«The problem is, neural networks aren’t measured in hours, or kilograms, or amperes. To calculate the cost, we need to know exactly how many tokens you’re using each day. Once you give us that number, we’ll figure out the monthly usage and apply our pricing.»

So I, a non-tech person (or really any business trying to understand how to use AI through their platform), am supposed to first learn what tokens are, figure out how to count them, find some method of tracking daily usage — and then they’ll graciously multiply that number by 20 or 22 working days, apply the model’s API token price, and add their markup. In case you didn’t know: the regular ChatGPT interface doesn’t show token usage stats in your account. I simply have no way of checking.

If you’re starting to feel sympathetic toward this team — who, after all, don’t know our usage patterns and are just external contractors — let me offer an analogy. Imagine if EdTech companies started rolling out courses like this: ten theory lectures, followed by the message, «Now you need practice. I don’t know which skills you’ll need at work, so figure it out yourself and go practice.»

Surely, you’d appreciate that thoughtful and tailored approach. Because how could the project team possibly know your specific situation… I trust you caught the sarcasm.

Yes, things like that happen. But they shouldn’t become the norm. Business, after all, is about understanding your niche. When you launch, the assumption is that you’re selling your experience and expertise to those who don’t have it. But if someone has to figure everything out on their own just to work with you… well, wouldn’t it be cheaper to simply have your own tech team build a sandbox for internal use? At least that way, you’re not paying a third party’s commission — since, you know, you’ve already figured it all out yourself.

And cases like these are not rare. But this book isn’t about business-building or even business communication, so let’s leave it at that.

That’s the second side of the story. And I had to become a bridge between those two worlds. Between techies who understand neither business needs nor a business’s actual level of technical comprehension, and users who don’t get why a Telegram bot named ChatGPT doesn’t count as implementing AI at work. These aren’t just two sides of a business process, or just different levels of tech literacy. They’re entirely different worldviews. And I’m not sharing this to get you to pick a side — but so that you can better see the reality in which we use neural networks. Or at least one fragment of it.

Of course, this book could have been written as a course. How to choose an AI model, how to talk to it, how to integrate it into your workflow. I’ve had plenty of practice making that kind of content — recording screencasts and designing curricula for online AI courses tailored to different professions. But let’s be honest. What’s the point?

Predictive models are giving way to reasoning models, and prompt engineering is becoming obsolete: a little more so with every new model.

Someone might open this book hoping to learn how to generate realistic images, and be disappointed that all it says is «add 'photorealism' to your prompt.» And might miss the part right before it where I say it’s crucial to choose the right tool — not all models handle realistic rendering well, and most of them blur the image.

This field sees new tools emerge every day. They’re more powerful, better, faster, more accessible. New features make old techniques irrelevant. Where yesterday you had to break up your research into steps, today you just select «in-depth research» and enter the topic. Where you had to copy-paste paragraphs from a report, now you can upload the whole file. Where you had to prompt «step by step» and «explain your reasoning,» now you just turn on reasoning models. And that’s just the changes in text-based models over the past 18 months. Everything in this space is evolving rapidly — except the human.

In most cases, difficulties with neural networks don’t stem from their capabilities or from a lack of technical skills for writing prompts. That part is easier than it seems. If you could handle writing high school essays, you can probably write two paragraphs using the formula «Role + Goal + Task + Context + Examples and Counterexamples.» The real challenge lies elsewhere: in how people think when they encounter a new tool.

In how we perceive tasks, tools, and reality. In our ability — or inability — to set priorities. In how we approach our work and understand our own responsibilities. And even in whether we understand what we enjoy and what we absolutely loathe — and whether we act accordingly.

You know what I’ve noticed — in myself and in everyone I’ve taught or worked with? We rarely get disappointed in the neural networks themselves. Most of us are reasonable enough not to expect a perfect result the moment we click «Generate.» But when we see what does come out, we often get disappointed — in our own ability to work with AI. Or in its ability to fulfill our lofty dreams and instant wishes. In how much effort it really takes to get the result we want. And that’s okay. But it’d still be nice to be disappointed a little less often. After all, disappointment isn’t why we’re trying to master these tools.

Hopefully, you didn’t open this book expecting a clear, foolproof system that guarantees perfect results on the first try. Because this book won’t teach you how to write prompts (although, to be fair, I already gave you a basic formula). What we will do is learn how to use your own reasoning so well that you won’t need templates anymore. So you can talk to a neural network like a colleague or assistant — and get results.
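Until that reasoning becomes second nature, though, it may help to see that formula written down once. Below is a minimal sketch of it as a fill-in template; every field name and sample value is purely illustrative, not a prescription.

```python
# A minimal sketch of the «Role + Goal + Task + Context + Examples and
# Counterexamples» formula as a reusable template. All field names and
# sample values are illustrative assumptions, not tied to any real tool.

PROMPT_TEMPLATE = """\
Role: {role}
Goal: {goal}
Task: {task}
Context: {context}
Example of what I want: {example}
Example of what I do NOT want: {counterexample}
"""

prompt = PROMPT_TEMPLATE.format(
    role="You are an experienced HR manager.",
    goal="Help new employees get productive faster.",
    task="Draft a one-page welcome letter for a new hire.",
    context="A 50-person logistics company; informal but respectful tone.",
    example="Short paragraphs and a checklist of first-week steps.",
    counterexample="Corporate clichés, emojis, marketing superlatives.",
)
print(prompt)  # paste the result into any chat-based model
```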

Every chapter in this book was inspired by one of the questions I heard most often — from other people, from fellow professionals, or from myself. Questions that are now wrapped in myths and form the basis of mass misconceptions. And that, very often, we ask instead of the questions we should be asking. Because more often than not, we ask «how» when what we really need to stop and ask is «why.» I sincerely hope that by the end of this book, you’ll find the answers that truly matter to you. Not the ones in the headlines — but the ones that have been on your mind all along.

By the way, here’s a token estimation solution for that earlier case — just in case anyone needs it:

Per employee, per session: 10 query steps over one document ≈ 79,000 tokens.

For 10 employees per day (approx. 4 sessions each): 79,000 × 10 × 4 = 3,160,000 tokens.

Per month (22 working days): 3,160,000 × 22 = 69,520,000 tokens.

With more intensive use, possibly twice as much: 139,040,000 tokens.

So: 70–140 million tokens per month for the company, 7–14 million per employee.
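If you’d like to rerun that arithmetic with your own numbers, here it is as a tiny script. The constants are just the assumptions from the case above; swap in your own.

```python
# Back-of-the-envelope token estimate from the case above. The constants
# are assumptions taken from that case; adjust them to your own usage.

TOKENS_PER_SESSION = 79_000   # ~10 query steps over a 10-15 page document
SESSIONS_PER_DAY = 4          # per employee, roughly 4 hours of active use
EMPLOYEES = 10
WORKING_DAYS = 22             # per month

daily_total = TOKENS_PER_SESSION * SESSIONS_PER_DAY * EMPLOYEES
monthly_total = daily_total * WORKING_DAYS

print(f"Per day, whole team:    {daily_total:,} tokens")        # 3,160,000
print(f"Per month, whole team:  {monthly_total:,} tokens")      # 69,520,000
print(f"Per month, intensive:   {2 * monthly_total:,} tokens")  # 139,040,000
print(f"Per employee, monthly:  {monthly_total // EMPLOYEES:,} tokens")
```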

Still some uncertainty around input/output tokens — but at least there’s a starting point now.
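And if you’d rather measure than estimate, token counts for a concrete text can be computed locally with tiktoken, OpenAI’s open-source tokenizer library. A minimal sketch; keep in mind that different models use different encodings, so treat the count as approximate unless the encoding matches your model.

```python
# Counting tokens locally with tiktoken, OpenAI's open-source tokenizer.
# Different models use different encodings, so pick the one that matches
# your model, or treat the result as an approximation.

import tiktoken  # pip install tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # used by many GPT models

document = "Paste the text of a typical 10-15 page case here."
token_count = len(encoding.encode(document))

print(f"{token_count} tokens in this text")
```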

Chapter 1. Are Neural Networks Just Another Hype?

The main argument I constantly encounter whenever the topic of artificial intelligence comes up is that it will vanish just as quickly as it appeared. That it’s just another tech fad, a digital pyramid scheme like crypto, NFTs, or the metaverse. And you know what? In many cases, I agree. The way AI is currently being used often doesn’t stand up to scrutiny.

Think about what appeared on our radar alongside the rise of publicly accessible generative neural networks. Social media posts written using prompts like: «You’re the best copywriter in the world, write a psychology article with actionable advice that everyone will share on Valentine’s Day.» Or images created by the DALL-E generator directly inside ChatGPT — blurry, repetitive drawings overloaded with grotesque details — used as post covers. By the time this book was published, ChatGPT had already replaced DALL-E with Sora Image, capable of producing images of much higher quality. That’s precisely why I believe it’s more important to learn how to work with neural networks in general than to master a specific tool — you could wake up tomorrow, open the interface, and not recognize it at all.

All of this, along with the automation of everything and everyone — client communication, job screening, content creation — will eventually become a thing of the past. Any practice driven by a frantic desire not to miss out on the «once-in-a-lifetime golden opportunity» fades within six months, or a couple of years at most for the most stubborn. With neural networks, I already see a decline in interest compared to the frenzy of the previous year. People tried them and realized, once again, they had been misled. Despite the promises, yet another tech innovation turned out to be just that — a tech innovation, not a money-printing button.

For those people, the neural network story will be over, at least for a while. But that doesn’t mean we should lump generative AI in with crypto, NFTs, or the metaverse. Even if it’s tempting. I myself, when I first tried ChatGPT-3.5 in Google Sheets and Google Docs, would alternate between being amazed by how easy it was to organize and draft content, and wanting to give up entirely after reading daily articles warning that all platforms would soon implement AI detectors and ban AI-generated content — leading to its quick extinction.

And no, don’t ask why I first integrated neural networks into those tools instead of using a user-friendly interface — I honestly don’t remember what drove me. Most likely, I simply found a clearer and easier guide for those platforms, so that’s where I started.

Now, if we think logically, we can identify one small but important trait that crypto, NFTs, and the metaverse all share: they are metaphorical accessories. Let me explain.

In life, we all have certain core components — objects, processes, relationships, information, emotions. These form the structure of our daily existence. Then there are the extras, optional features that can be added in — like accessories or «ribbons.» Take overpriced takeaway coffee as an example.

Takeaway coffee is more popular among office workers than remote workers. The former can grab a cup on the way to work, during lunch, or on the way home. It fits seamlessly into their routine, making life a bit more enjoyable (whether genuinely or due to marketing is beside the point). For the latter, however, it requires a lifestyle change — getting dressed, leaving the house, strolling to the café, or bringing the coffee back home. That same coffee cup demands additional effort from them, requiring changes to their existing routines. It might even replace some of their established habits.

Whatever you did instead of going out for coffee — those were established parts of your life. So, for a remote worker, that coffee has to offer clear, undeniable benefit or joy (like getting them out for a walk). Only then does that «ribbon» become part of their life. Otherwise, it either gets discarded or becomes an irritating obligation.

Most recent technological trends — cryptocurrency hype, mining rigs on balconies, NFT investments — are «ribbons,» and not ones that fit many people’s lives. They appeal mostly to those deeply immersed in digital innovation or investing. In other words, they are technologies that inherently complicate life by adding extra layers of behavior and thus only attract a narrow niche.

It’s a different story, though, when we recall that (if you’re reading this book) you’ve lived through the past ten-plus years and seen real digital innovations firsthand — those that have truly reshaped our lives. Personal computers. The internet. Mobile phones. Laptops. Smartphones. Social networks. You can probably name others, but those are the most obvious ones for me.

We’ve gone from cassette tapes to streaming services. Each stage — cassettes, DVDs, flash drives, online movies — briefly but clearly took its place in our lives. And if we consider marketplaces and food delivery services as socio-technological innovations… well, you get the idea.

These innovations are not like the ones we just discussed. They don’t sit on top of our lives — they transform the very fabric of how we do things, making it all simpler and more convenient. You no longer need a payphone — you have a mobile. You don’t need to visit libraries or video stores — books and movies are online. You’re not tied to a place — you can take your laptop on a trip or just to another room. These aren’t «ribbons»; they’re technologies that address needs we already have.

Generative neural networks should be grouped with the internet and smartphones — not NFTs and crypto. We already create content: we write posts, emails, and reports. We choose images for presentations. We look for music to set the mood at events. We brainstorm and shoot social media videos. We analyze graphs and spreadsheets. We look up answers online — not just facts, but also forum threads and articles like «how to survive a breakup» or «what to check when buying a used car.» AI is just a tool that lets us do all of this differently. Sometimes faster. Sometimes better and more personalized.

Let’s be honest. Are you really sure you want to go back to digging through stock photo sites for the right image instead of just describing what you want to an AI? That you’d rather puzzle over which friend to ask for advice than run your question through a neural net? That you’re willing to draft every text from scratch without even using an AI-generated version as a rough draft? If you answered «yes,» chances are you’ve either never used neural networks for those tasks or only tried them very superficially. Because going back to the old, more complicated ways of doing everyday tasks is inconvenient — like shopping for every little item in malls and markets instead of just ordering what you need from the nearest pickup point.

But there’s something else worth saying. People often start to see neural networks differently once they understand the real history behind generative AI.

The truth is, this isn’t something that just appeared in the last ten years — or even this millennium. It’s the result of decades of global scientific work, officially kicked off by the Dartmouth Conference in 1956. In short, that’s when researchers discussed the idea of creating programs capable of working with natural human language — not just programming languages — and generating varied responses to a changing environment. In other words, systems that could respond to context and nuance.

What we see today is the result of nearly seventy years of development, finally accessible to the public thanks to massive computational power. This isn’t some overnight innovation — it’s the culmination of long-term strategic research and expensive trial and error. And if you pay attention to how AI is evolving today, the models aren’t making miraculous quantum leaps. Sure, IT folks tell us that predictive models are being replaced by reasoning ones, and that AI agents are the new frontier in task-solving… But for us users, it looks a bit different. For us, the neural network is simply browsing more websites to answer our questions. It’s writing texts in a more natural style with fewer odd artifacts. It’s rendering textures more accurately and — finally! — can place text on images correctly. It lets us better control camera movements in video generation, even if it might still produce a frame that looks like surrealist horror when you just asked it to make a character wave and smile.

It’s simply getting better at doing the same things we started using it for back in fall 2022.

So, expecting AI to suddenly vanish from our lives — or to evolve into sentient humanoids within a couple of years — probably isn’t realistic. This is a different story, unfolding at a very different pace.

Chapter 2. Will AI Make Us Dumber?

There are many people who fear artificial intelligence precisely because they believe it could make them dumber. Sometimes it seems as if they imagine AI as a kind of monster that will one day burn all the books in their libraries (if they even have any?), ban thinking, reasoning, and observing reality, and instead force everyone to consult it about every little thing — whether they want to or not. And even if they themselves have enough willpower to resist this monster, the so-called «younger generation» will surely fall victim to it, and the world will suffer from mass intellectual degradation.

This is, of course, an ironic take. But the tone of such discussions often evokes exactly that impression. Still, there are a couple of important nuances. First — I’m an optimist. I believe in the power and strength of human will and in people’s natural curiosity about life. In their drive to creatively reinterpret and construct their own vision of reality. And people who possess those qualities quickly grow bored with the generic responses of neural networks.

Second — I’m also a tech realist. And I know that, in fact, artificial intelligence has already enslaved humanity. Why everyone is suddenly panicking now is beyond me. Up until recently, we all lived quite comfortably in a world where AI was already limiting our development and thinking, and no one seemed to mind. But as soon as its generative sibling entered the scene, it started attracting an unforgivable amount of attention and became the villain of the story. It’s all rather amusing.

Let me explain. But first — just one question. Would you really voluntarily subscribe to social media accounts or consume content created by artificial intelligence? Even if it’s accurate and high-quality. I emphasize: voluntarily.

I wouldn’t. I mean, I don’t mind AI writing encyclopedic articles based on verified sources and reviewed by experts in the field. It’s quite likely that such articles would be clearer and more engaging without compromising content. But when it comes to blogs, opinion pieces, and social media, I want to hear from a human being. Someone with a living, multidimensional point of view. And I wouldn’t choose to engage with AI-generated content unless it was part of an experiment exploring its capabilities. Yet despite that, every single day I read articles, watch videos, or view photos created by neural networks. Every day. All because of that very same generative «sibling.»

Just to clarify: artificial intelligence isn’t limited to generative neural networks. It doesn’t even refer solely to neural networks in general (for instance, image search engines also rely on neural network technology). AI is an entire class of technologies.

Of course, in everyday conversation we tend to use the terms «artificial intelligence,» «neural networks,» «AI,» «generative models,» and the like as synonyms. Non-tech people quickly reached a general consensus that there’s no need to distinguish between them. Whatever the term used, we’re all referring to the same hype topic — generative neural networks. Even I use all these terms interchangeably in this book. But from a professional standpoint, there’s a huge difference. Generative neural networks are only a subset of neural networks, which themselves are only a subset of artificial intelligence. And it’s this expert distinction that helps explain why we’re actually afraid of the wrong thing.

Here’s the truth: the recommendation algorithms behind search engines and social networks… Yes! Those are also AI. And they’ve been controlling all of us — our consciousness, worldview, and the limits of our intellectual development — for quite some time now.

Try finding something fundamentally new to you on the internet — or better yet, on social media. Something completely unrelated to your current interests. If you did manage to find it, it was probably only because you made a very specific, direct search. Right? And even then, the results were probably still somehow tied to your existing interests.

Say, for example, a fan of celebrity gossip wants to learn about quantum physics. After the neat academic definition polished by the neural network in the search engine, they’ll most likely come across an article by someone like «Maria of the Universe» explaining how she arranged her furniture based on the principles of quantum physics — or how those same principles underpin her new breathing retreat. Never mind that it’s nonsense. What matters is that the nonsense makes quantum physics feel highly relevant to your personal interests.

That’s the part that should be scary. Yes, generative AI is the rock star — everyone’s talking about it, and only the lazy haven’t formed an opinion. But behind it, laughing quietly, are the recommendation algorithms — the producers with suitcases full of millions — who are actually running the entire human information sphere.

So, in a broader sense, we remain stuck in a world where we face reality armed only with our narrow little point of view. With our limited mental framework. A pathetic pale-green flashlight beam that makes everything it shines on appear green.

And this is a real problem. The potential for intellectual growth in an adult is concentrated within their zone of proximal development. A child can absorb entirely new information about the world without much effort. An adult already has a mental framework — a worldview — into which new knowledge must be integrated. And integration only happens if that knowledge connects with something they already know. The narrower a person’s interests, the fewer such points of connection they’ll have. And no matter how deeply they dive into a given topic, even if new information somehow reaches them, they may fail to understand it due to a lack of foundational knowledge and analogies — or they may forget it because they have no «mental shelf» to store it on.

Recommendation algorithms, in their relentless effort to perfectly align with a person’s interests, keep narrowing the scope of content they deliver. Today they learn you like people-focused content and show you articles about people. Tomorrow — specifically, about psychology. The next day — relationship psychology. And by the end of the week, your feed is entirely made up of posts like «How Long Should You Wait to Start a New Relationship After a Breakup at 35+,» just with different headlines. Because the algorithm has figured out exactly what hooks your attention.

Naturally, this prevents the person from seeing their situation from a broader perspective. Articles suggesting that it might be worth repairing old relationships, or reflecting deeply on what happened, or exploring other ways to enrich your life — those simply never make it through. Because the algorithm has not only figured out what topic you like, but which angle keeps you engaged the longest. And it exploits that. The person, in turn, starts to think there’s only one right scenario — after all, that’s all they ever see.
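To make that mechanism concrete, here is a deliberately crude toy simulation of the feedback loop. It is not any platform’s real algorithm, just the bare logic of «reward whatever got clicked» run for a month, which is already enough to collapse a feed onto one niche.

```python
# A deliberately crude toy model of a recommendation feedback loop. This
# is NOT any real platform's algorithm -- just the bare "reward whatever
# got clicked" logic, which is enough to collapse a feed onto one niche.

import random

topics = ["people", "psychology", "relationships", "breakups at 35+"]
weights = [1.0, 1.0, 1.0, 1.0]  # start with a balanced feed

random.seed(42)
for day in range(30):
    shown = random.choices(topics, weights=weights)[0]
    if shown == "breakups at 35+":           # the topic that "hooks" this user
        weights[topics.index(shown)] *= 1.5  # boost what got engagement
    else:
        weights[topics.index(shown)] *= 0.9  # quietly demote the rest

total = sum(weights)
for topic, weight in zip(topics, weights):
    print(f"{topic:>18}: {weight / total:.0%} of the feed")
```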

But the narrowing of our developmental zone and the distorted worldview that results from it aren’t the only consequences of humanity’s AI capture. There’s one more: the loss of connection with reality. Including alienation from other people.

Social networks were originally created to maintain human connection, share information, and foster communication. But over time, they adopted a broadcast model of communication, similar to traditional media: one broadcaster, many consumers. The only difference is that consumers can leave comments. But fundamentally, the context is shaped by content creators, and the conversation revolves around their perspectives — not the objective facts of reality.

To put it more simply: back in the 2000s, you couldn’t help but hear about Star Factory, The Last Hero, or Wheel of Fortune. Even if you never watched them, just being around other people meant you were aware of them. These were part of the cultural zeitgeist, touchstones that everyone more or less recognized. Same with iconic celebrities, actors, singers. Big sports events. It was hard to avoid what didn’t interest you personally — your favorite program would advertise another, one you’d otherwise never hear about. Yes, teens and retirees, professionals and homemakers, singles and families might poke fun at each other’s tastes — but at least they knew what those tastes were.

Today, that’s nearly vanished. Mass culture barely seems to exist. Recommendation algorithms and the shift of media to the internet have created an environment where everyone can sit inside their own informational bubble, completely unaware of what’s going on in the wider world. Of how other people live or communicate. Even the jokes about boomers, millennials, and zoomers often reflect how this ability to isolate ourselves has amplified our differences. Online media only show us people like ourselves and reinforce our views. And no incognito mode can change that.

We’re stuck in perfectly sealed information bubbles created by recommendation systems. And without curiosity, without a desire to understand the diversity of the world and its perspectives, without a deliberate effort to break through the bubble — not only is escape impossible, even making a tiny hole in it is out of reach. The only way is to intentionally seek out real people who are different from you. To look for them. To open books and films you would never have chosen. To purposely explore online storefronts, magazines, platforms, and accounts with different perspectives — or better yet, multiple different perspectives. To actively expand your bubble from within by filling it with new, diverse information. To consciously curate its formation.

We have to fight back against the domination of recommendation-based AI. Because it does make us dumber. Or at least, more narrow-minded. It disconnects us from reality. It builds an absurdly narrow and biased view of the world, people, and society. And we must do everything we can to resist it. To consciously design our own development strategy. Even if it’s informal. Just as a thinking person. Just as a Human.

Chapter 3. How to Create Texts with AI?

In a way, generative AI isn’t all that bad. Yes, it hallucinates — and does so quite often. It invents facts and events, books and movies. It makes mistakes in calculations and when offering advice. But in some ways, it’s the direct opposite of recommendation algorithms. Though it’s hard to say which is worse — personal misconceptions or collective ones.

Collective misconceptions form the very foundation of what artificial intelligence is trained on, and what it reproduces in every one of its responses. That’s precisely why many activists oppose the use of AI models in critical areas of life — to prevent it from amplifying racial and gender stereotypes in the justice system or relying on common medical errors when diagnosing patients. Because even a perfectly trained AI that consistently produces the «right» answers and never goes off track still doesn’t act based on objective reality (it knows nothing about it), but instead on a sort of averaged collective notion of it.

During training, AI doesn’t memorize facts. It’s not a massive search engine drawing from an internal library. It’s a program that identifies patterns in certain types of texts and reproduces them when a user prompt touches on that area. It doesn’t «know» anything.

Imagine you’re playing a game where you have to answer every question you’re asked — but saying «I don’t know» is not allowed.

First question: Explain the essence and principles of quantum physics. Your mind will probably dredge up something about subatomic particles, how they behave according to different laws than macro-objects, and so on. You might not say anything outright false, but a professional physicist would probably be horrified by your explanation. Because it’s a niche topic, and you likely haven’t encountered much information on it in your life.

Second question: What are social networks and what are they for? Here, you’d likely do much better — you can explain what they are, why people use them, maybe even give some concrete examples. That’s because your «training data» — the information you’ve absorbed throughout your life — is much richer on this topic. You more or less get what it’s about. And even so, a professional social media manager would probably smirk at your generalizations about which platforms are trending or what features are most popular.
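That game is, in miniature, what a language model does. For the technically curious, here is a toy word-level bigram model: it stores no facts at all, only which word tends to follow which in its «training data,» yet it produces fluent-sounding output. Real models are incomparably more sophisticated, but the principle of reproducing patterns rather than retrieving facts is the same.

```python
# A toy illustration of "patterns, not facts": a word-level bigram model.
# It records only which word tends to follow which in its training text,
# yet it will confidently produce plausible-sounding continuations.

import random
from collections import defaultdict

training_text = (
    "social networks help people share photos . "
    "social networks help people stay in touch . "
    "people use social networks to share news ."
)

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

random.seed(7)
word, output = "social", ["social"]
while word != "." and len(output) < 12:
    word = random.choice(follows[word])  # reproduce a learned pattern
    output.append(word)

print(" ".join(output))  # fluent-looking, yet nothing here is "known"
```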
