
Filtering out duplicate slugs after a large data import

13 replies
Last updated: Sep 8, 2020
Hi all. We've run a large data import and there may be one or two duplicate slugs that were generated. Any ideas on a way to filter those out?
Sep 8, 2020, 9:16 AM
I guess you could do a GROQ query with something like this:
Sep 8, 2020, 9:36 AM
*[_type == 'something']{
  _id,
  slug,
  "hasDuplicateSlug": length(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0
}
Sep 8, 2020, 9:36 AM
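A variant, in case it's useful: instead of flagging every document, this should return only the documents whose slug clashes with another one. It's a sketch assuming the same 'something' type and slug field; count() on the inner array should behave like length() here.

// returns only the documents that share a slug.current with at least one other document
*[_type == 'something' && count(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0]{
  _id,
  "slug": slug.current
}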
Thanks for getting back to me, appreciate it! That came back as true for every something. I tried switching one of the slug.currents for simply slug but got the opposite.
Sep 8, 2020, 9:59 AM
Will keep tinkering...
Sep 8, 2020, 10:00 AM
I updated it recently
Sep 8, 2020, 10:00 AM
So you did... one sec
Sep 8, 2020, 10:00 AM
I forgot you had to filter out the current document
Sep 8, 2020, 10:01 AM
Also, I had put the ] in the wrong place, updated just now. Sorry about that
Sep 8, 2020, 10:02 AM
Ahh no need for apologies, I really appreciate the help. Unfortunately that timed out; there are more than 4k entries we're running it against.
Sep 8, 2020, 10:07 AM
huh, try paginating it?

*[_type == 'something'][0..500]{
...
}
Sep 8, 2020, 10:10 AM
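Worth noting the .. slice is inclusive at both ends, so the next page should pick up where the previous one stopped. A sketch of page two, assuming the same query; step the slice forward until a page comes back empty:

// page two of the same duplicate check, run over a smaller window of documents
*[_type == 'something'][501..1000]{
  _id,
  slug,
  "hasDuplicateSlug": length(*[_type == 'something' && slug.current == ^.slug.current && _id != ^._id]) > 0
}

The inner lookup still scans the full document set; it's only the outer projection that's paged, which is what cuts the per-request work enough to dodge the timeout.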
Great minds, that's exactly what I'm just running šŸ˜‰
Sep 8, 2020, 10:11 AM
That's the ticket, thanks user J - I really appreciate that šŸ‘
Sep 8, 2020, 10:14 AM
Ye!
Sep 8, 2020, 10:18 AM
