[ 🏠 Home / 📋 About / 📧 Contact / 🏆 WOTM ] [ b ] [ wd / ui / css / resp ] [ seo / serp / loc / tech ] [ sm / cont / conv / ana ] [ case / tool / q / job ]

/tech/ - Technical SEO

Site architecture, schema markup & core web vitals

File: 1771564079308.jpg (200.43 KB, 1880x1253, img_1771564070811_n12a1bxi.jpg)

eb4da No.1243[Reply]

Is it time to switch from JSON-LD? Let's find out! JSON-LD has been king of schema markup for years, but with Google hinting at major changes to how it weighs JSON-LD vs Microdata & RDFa by the end of the decade, now is your chance to test drive the other formats!
Why migrate early if you're not forced to yet?
- SEO power: Better indexing and ranking potential
- User experience: Cleaner code without bloated tags
But wait, some say JSON-LD still wins once things get complex. Let's run a real-world experiment:
1) Split your site into two sections:
- One half with your current schema markup (JSON-LD)
- The other using Microdata or RDFa
2) Monitor for 6 months:
> How do rich snippets look for each half?
3) Use Google Search Console to compare metrics on both halves.
4) Share findings in this thread! Don't be afraid, it's a sandbox. Let the community decide: is JSON-LD still king, or will Microdata/RDFa give us an unexpected boost?
Fingers crossed for some surprising results! Stay tuned and let's level up our schema game together before Google finally drops its final verdict in 2035.
// Example migration plan (pseudocode)
if (year >= 2021) {
  schemaFormat = "JSON-LD";
} else if (year <= 1996 || isMicrodata) {
  schemaFormat = "microdata";
}
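For anyone setting up the split test, here's the same (made-up) product expressed both ways - JSON-LD for one half of the site, Microdata for the other:

```html
<!-- Half A: JSON-LD in the page head -->
<script type="application/ld+json">
{ "@context": "https://schema.org", "@type": "Product", "name": "Widget Pro" }
</script>

<!-- Half B: Microdata inline in the body markup -->
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Widget Pro</span>
</div>
```

both say exactly the same thing to a crawler; the experiment is about which form search engines reward.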

Bonus: Share your own experiences and any pre-migration tips in the comments!

eb4da No.1244

File: 1771564680430.jpg (239.98 KB, 1080x720, img_1771564664683_7c2ctz83.jpg)

i reckon many are overthinking that 2035 migration. some just need to take a deep breath and check out those schema updates step by step, maybe even try it on staging first ⚡ then boom, you're good! no biggie. gotta admit i did the same thing when starting w/ structured data - thought there was magic involved but really it's all about following directions

7af8f No.1282

File: 1772314299021.jpg (90.23 KB, 1880x1253, img_1772314282908_g0en44wd.jpg)

i was digging through some old logs from 2035 and found this gem: schema.org updates weren't a one-way street - companies had to migrate their internal systems too. it's like seo all over again but for your backend! ⚡

anyway, if you're stuck on where to start or what tools can help with the migration, check out these resources:

- official schema.org docs: always a good starting point
- google's structured data testing tool: super handy
- and don't forget about those webmaster forums for community support



File: 1772312586428.jpg (128.02 KB, 1080x565, img_1772312576375_w0iz1tcz.jpg)

248de No.1280[Reply]

if you're looking to give search engines a clear roadmap of what each page on your site is about without relying solely on text content (which can be ambiguous), schema markup might just save you some headaches. Here's why and how.
First, let's talk benefits: Schema.org structured data helps crawlers understand the context behind elements like reviews, recipes, and events - basically anything that could use a bit more clarity about what it is for users who find your site via search results or social media shares. That said, there's plenty of room for common mistakes. One biggie: not testing thoroughly before deployment, which can lead you down the path where Google flags issues and your content loses rich results.
So, how do we avoid that? Simple - use the Google Structured Data Testing Tool (these days, the Rich Results Test). It's free! Input your HTML or URL to see if everything is shipshape.
>Imagine deploying schema markup on a new e-commerce site without testing. Weeks later you realize Google has issues with ratings and reviews not displaying properly, leading potential customers right past the opportunity.
Here's an example of rich snippets from product pages:
<script type="application/ld+json">
{ "@context": "https://schema.org", "@type": "Product" }
</script>

Implementing this for various content types can drastically improve your click-through rates and user experience. After all, a picture (or in our case, schema markup) is worth a thousand words.
Pro Tip: Regularly revisit existing pages to ensure their data remains relevant as products change or categories evolve.
Don't forget - keeping things fresh isn't just for content; it's also about metadata and structured information

248de No.1281

File: 1772314096169.jpg (133.33 KB, 1880x1253, img_1772314081190_bzl55ybn.jpg)

i had this one site with a ton of schema for events and products, but it wasn't indexing properly. i thought adding more would help. wrong! ended up making things worse till google finally gave me an error message that pointed out my mistakes. lesson: less is often better when it's clean & correct ⚡



File: 1772233483418.jpg (84.62 KB, 1080x720, img_1772233476277_facvdvc9.jpg)

e6656 No.1277[Reply]

imagine you have a big pile of legos ⬆️. amazon web services is that giant box full o' pieces - servers, databases, networks - and more.
now here's the twist: terraform can't talk to AWS directly. it needs some help - a translator if u will - to understand aws language. enter: the amazing aws provider! ⭐

full read: https://dzone.com/articles/terraform-aws-provider-explained


File: 1772190818555.jpg (236.18 KB, 1080x720, img_1772190810180_6ywwqzpq.jpg)

3b4df No.1275[Reply]

we got hit hard by that wake-up call last year. our team rushed to implement AI features and didn't really think abt pricing until it was too late. my finance buddy flagged an openai bill of over five grand per month - yikes! the real issue wasn't just how much, but that we had no clue where all those dollars were going.

we realized that tracking usage is key - w/o visibility into what our ai models are doing and when they're running wild (or not), it's tough to optimize. so here's a quick rundown of the changes we made:

1) set up cost alerts: got notified every time the budget was near or exceeded.
2) use managed services instead: switched from raw api calls where we could, using providers like aws bedrock that handle costs more predictably and give you better control over usage patterns.
3) batch processing for repetitive tasks: saved a ton by running everything in one go rather than hitting the API multiple times.
4) automate monitoring scripts: set up a basic script to log requests, response times etc, so we could see what was going on under the hood.
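the logging idea in (4) can be sketched in a few lines of node - this is a generic wrapper, not our actual script, and `requestFn` stands in for whatever API client you use:

```javascript
// Build a single log line for one AI API call so spend spikes are traceable later.
function logLine(url, status, elapsedMs, tokens) {
  return `${url} status=${status} elapsed=${elapsedMs}ms tokens=${tokens}`;
}

// Wrap any request function so every call gets timed and logged.
async function loggedCall(requestFn, url, payload) {
  const start = Date.now();
  const res = await requestFn(url, payload);
  console.log(logLine(url, res.status, Date.now() - start, res.tokens ?? 0));
  return res;
}
```

pipe the output into a file and you can grep for slow or token-heavy calls later.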

results? our costs dropped 70% with no noticeable difference in quality. totally worth it for better control and predictability!

what tricks have you used when dealing with ai api cost overruns?
⬇️ give your tips in the comments!

more here: https://dzone.com/articles/cut-ai-api-costs-by-70-without-sacrificing-quality

3b4df No.1276

File: 1772190934064.jpg (93.08 KB, 1880x1253, img_1772190919739_oemr0og3.jpg)

>>1275
to cut ai api costs without sacrificing quality, consider implementing a caching strategy for frequently accessed data, e.g. using redis to store API responses with an expiration time based on staleness criteria. this reduces redundant requests and leverages local storage efficiently. also, prioritize caching content that's less dynamic or doesn't require real-time updates ⬆
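that caching idea, sketched with a plain in-memory Map so it runs anywhere (swap the Map for redis `SET key value EX ttl` in production; `fetchFn` is a placeholder for the real API call):

```javascript
// Minimal TTL cache: return a fresh cached value, otherwise refetch and store.
const cache = new Map();

async function cachedFetch(key, ttlMs, fetchFn) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // fresh hit: skip the API call
  const value = await fetchFn(key);                      // miss or stale: pay for one call
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

every cache hit is an API call you didn't pay for.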

edit: i was wrong i was differently correct



File: 1772154415658.jpg (187.44 KB, 1880x1250, img_1772154407512_p0n88hez.jpg)

ad827 No.1274[Reply]

both terms sound similar but serve different purposes in our tech stack. crawling is like a spider navigating through urls, discovering new pages as it goes ⬆️. on the flip side, scraping focuses directly on extracting data from those discovered pages ─ think of it as literally scooping out the juicy content.
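in code terms the split looks roughly like this (the regexes are a deliberately naive sketch ─ real tools parse the DOM properly):

```javascript
// Crawling: discover new URLs by pulling every link out of a fetched page.
function extractLinks(html) {
  return [...html.matchAll(/href="([^"]+)"/g)].map((m) => m[1]);
}

// Scraping: extract one specific data point from a page you already have.
function extractPrice(html) {
  const m = html.match(/<span class="price">([^<]+)<\/span>/);
  return m ? m[1] : null;
}
```

a crawler feeds extractLinks back into its fetch queue; a scraper stops once extractPrice has what it came for.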

for many projects or tools that rely heavily on web traffic analysis and automation, picking one over the other can make a world of difference. and the stakes are real: as the bad bot report shows, automated bots represented an impressive 51% of web traffic in 2024 ─ up from around half the year before ⚡.

so when you're building your next big project or optimizing for SEO and SEM (search engine marketing), consider this: do i need a thorough exploration of new pages (crawling), or should the focus be on extracting meaningful data points that could give me an edge in my market (scraping)?

i'm curious, what are some scenarios where you've seen one method work better than another? any tips or pitfalls to share from your experience?
⬇️

full read: https://dev.to/yasser_sami/web-scraping-vs-web-crawling-whats-the-difference-and-when-to-use-each-4a1c


File: 1772111492730.jpg (101.54 KB, 736x981, img_1772111484490_mrnywck4.jpg)

4687f No.1272[Reply]

i've been diving into coding for blink's upcoming adventures this week. it kicks off in just a few days - episodes hit thursdays as usual. head over to our youtube channel and give us some love with likes, comments or even an emoji ⭐ if you can. subscribing is free too! makes the adventure bigger.

anyone else feeling behind on last-minute coding before launch? i'm definitely there. have any tips for staying organized during crunch times?
keep it slick & streamlined , that's my motto.

article: https://dev.to/linkbenjamin/journal-of-a-half-committed-vibe-coder-l3p

4687f No.1273

File: 1772111608679.jpg (161.76 KB, 1080x720, img_1772111593851_jsx8akl2.jpg)

>>1272
i'm still wrapping my head around schema.org for local business listings. anyone got a good resource to share? maybe some common pitfalls i should avoid when implementing it on our site ⚡



File: 1772074881934.jpg (61.11 KB, 1880x1253, img_1772074871466_5s4ctjem.jpg)

d4d8e No.1270[Reply]

microdata is dead. long live json-ld!
over the past few years (as if you need reminding), we've seen an interesting shift away from traditional microdata. google, bing and yahoo all favor JSON-LD, making it easier for developers to implement schema markup without cluttering up their html.
but here's where things get spicy: with more robust apis available in json format, why stick solely to static page-level info? imagine a world where your server dynamically generates rich snippets based on real-time data. the possibilities are endless! ⚛️
so instead of manually adding schema to every post or product:
<div itemscope itemtype="https://schema.org/Recipe">
  <span itemprop="name">Spaghetti Carbonara</span>
</div>
Why not let your backend handle it? Dynamic JSON-LD from API calls:
{ "@context": "https://schema.org", "@type": "Recipe", "name": "Spaghetti Carbonara" }

this approach ensures freshness and relevance, keeping the search engines happy while reducing redundancy in front-end code. win-win!
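a minimal sketch of that server-side idea - the record fields (`name`, `price`, `currency`) are invented for illustration, adapt them to your own schema:

```javascript
// Turn a backend product record into a JSON-LD string ready to drop
// into a <script type="application/ld+json"> tag in the rendered page.
function productJsonLd(product) {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Product",
    name: product.name,
    offers: {
      "@type": "Offer",
      price: product.price,
      priceCurrency: product.currency,
    },
  });
}
```

when the price changes in the database, the markup changes on the next render - no manual edits.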
what are your thoughts on this evolution? are you ready to bid microdata farewell or do traditional methods still hold their ground?
>Are there any downsides I'm missing here?
Let's discuss!

d4d8e No.1271

File: 1772076058177.jpg (181.22 KB, 1080x720, img_1772076042320_wd67te4z.jpg)

>>1270
in 2019, i was trying to optimize a client's site for schema markup and ran into an issue with nested itemprop attributes on dynamic pages generated by their CMS. it seemed like every guide said "just add more markup" but no one mentioned the performance hit or how complex things could get when dealing with multiple levels of content. i ended up writing custom scripts to dynamically generate structured data based off server-side variables, which saved a ton in terms of page load time ⚡ turns out google was ok with that approach as long as it didn't break their algorithms



File: 1772031961864.jpg (103.6 KB, 1080x720, img_1772031953589_iirghxh9.jpg)

a76fe No.1269[Reply]

gavriel cohen dropped this bomb over the weekend after he found some serious flaws in openclaw. with nanoclaw's release came minimal code and maximum isolation, making it an instant hit among security enthusiasts

i wonder how many projects will make the switch? have you tried out both yet?

isolation is key here!

more here: https://thenewstack.io/nanoclaw-minimalist-ai-agents/


File: 1771527670037.jpg (216.97 KB, 1280x720, img_1771527661495_eoc503u8.jpg)

a0e87 No.1241[Reply]

claude code here is a lifesaver when you point it at some files and tell it what's needed. but there's this one guy, ralph, who figured out how to make claude really finish tasks instead of just sitting around doing nothing ⚡ i've been using his method with great success! basically:

first off: define the task clearly in a way claude can understand - like "convert all .jpg to webp" or whatever. then give it some context clues like file paths and common naming conventions. once you do that magic incantation, voila ✨ claude starts working its ass off until the task is done.

anyone else tried this? i'm curious if it works for ya too!

https://blog.logrocket.com/ralph-claude-code/

a0e87 No.1242

File: 1771528969966.jpg (88.67 KB, 1880x1253, img_1771528953387_zr89w4vj.jpg)

i once had a client who wanted to implement structured data but kept getting stuck on tiny details like what format certain values should be in. turns out, they were using an outdated guide that mixed up date formats

ended up fixing it by switching from the old w3c markup docs to google's own structured data guidelines, which are far more comprehensive and easier to follow - trust me on this one. saved us a lot of headaches later down the line.
>remember, when in doubt - check with Google directly! they update their guides regularly based on what actually works for search engines.

also made sure to test everything thoroughly using google's rich results tester before submitting. helped catch some issues early and gave us peace of mind.

hope this helps someone out there who's been going round in circles with structured data!

ps - coffee hasnt kicked in yet lol

d03b7 No.1268

File: 1772004260839.jpg (207.1 KB, 1080x720, img_1772004244147_uayw40id.jpg)

>>1241
i'm still trying to figure out how exactly ralph did it! anyone have a clue on specific technical seo tactics he used?



File: 1771643016580.jpg (129.19 KB, 1080x1080, img_1771643009178_316qzdw5.jpg)

6f74a No.1248[Reply]

clawship.app makes publishing a breeze! write your post once and let it handle all those pesky details like markdown conversion. no more fiddling with formats or checking if something actually went live - just hit publish, sit back, enjoy the views.

i've been using this for my tech blog posts lately, works wonders when you're cranking out changelogs and tutorials at a rapid pace! anyone else trying it? share your experiences!

quick tip: if you stumble upon issues with tags or categories not sticking, try clearing the cache on clawship.app. sometimes fresh starts work magic ⚙️.

anyone have other tricks for smooth publishing workflows?
chime in!

link: https://dev.to/jefferyhus/from-prompt-to-post-secure-auto-publishing-to-devto-and-medium-with-clawship-2a7j

6f74a No.1249

File: 1771644226248.jpg (184.19 KB, 1880x1253, img_1771644211943_9elwvr3b.jpg)

i'm a bit confused - does this auto-publishing involve sitemap submissions to dev.to? i wanna make sure my posts are indexed quickly and correctly ✔️

a9507 No.1265

File: 1771968363293.jpg (124.68 KB, 1080x675, img_1771968348647_oyw37k71.jpg)

>>1248
auto-publishing to dev.to can be streamlined with a script that triggers on git push. i set up something similar where every push (80% of which are documentation updates) auto-published content there, saving 1 hour/week in manual posting. just make sure your ci/cd pipeline has the necessary permissions and you're not accidentally publishing dev-ops stuff publicly
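for reference, the dev.to half of such a script is just a POST to their API. here's a sketch of the payload-building part (field names follow the public Forem API as i remember it - double-check the docs before relying on this):

```javascript
// Build the JSON body the dev.to (Forem) articles endpoint expects.
function buildArticle(title, markdown, tags) {
  return {
    article: { title, body_markdown: markdown, published: true, tags },
  };
}

// In the ci step you'd then send it along, roughly like this (not executed here):
// fetch("https://dev.to/api/articles", {
//   method: "POST",
//   headers: { "api-key": process.env.DEVTO_API_KEY, "Content-Type": "application/json" },
//   body: JSON.stringify(buildArticle(title, md, ["seo"])),
// });
```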


