Hello, I see on your resume that you use Redis in your projects. Why do you use Redis?
I can’t help but grumble inwardly: what kind of question is that? Everyone uses Redis. But of course you can’t say that out loud.
So I answer seriously: Hello, handsome and charming interviewer. Traditional relational databases such as MySQL can no longer handle every scenario, for example flash-sale inventory deduction or the peak traffic on an app’s home page; these can easily overwhelm the database. That is why we introduced caching middleware. The most common caching middleware on the market today are Redis and Memcached, and after weighing their advantages and disadvantages, we finally chose Redis.
For a more detailed comparison, remember to look up the differences between Redis and Memcached: their respective advantages, disadvantages, and suitable scenarios. I will write it up when I have time.
So, young man, let me ask you again: what data structures does Redis have?
String, Hash, List, Set, SortedSet.
I believe 99% of readers can name the five basic Redis data types. If you can’t, you need to brush up; better yet, know which scenarios each of the five types fits best.
However, if you are an intermediate-to-advanced Redis user and want to stand out from the other candidates in the interview, you should also mention the following: HyperLogLog, Geo, and Pub/Sub.
If you want even more bonus points, say that you have also played with Redis Modules such as BloomFilter, RediSearch, and Redis-ML. At this point the interviewer’s eyes start to light up: this young man has something.
Note: when answering Redis-related questions in interviews, I often mention the Bloom filter. It has plenty of use cases, works great in practice, and the principle is easy to understand; after reading one article you can discuss it in front of the interviewer. Isn’t that great? Portal below ↓
If a large number of keys need to be set to expire at the same time, what do I generally need to watch out for?
If the expiration times of a large number of keys are set too close together, Redis may briefly lag at the moment they all expire; in serious cases it causes a cache avalanche. We generally add a random value to the expiration time to spread the expirations out.
An e-commerce home page often uses a scheduled task to refresh the cache, so the expiration times of a large amount of data may be highly concentrated. If they are all the same, and a flood of users happens to arrive right at that moment, it can cause a cache avalanche.
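The jitter idea above can be sketched in a few lines. This is a minimal illustration: the BASE_TTL and JITTER values are made-up numbers, and the redis-py call in the comment is just one way to apply the result.

```python
import random

BASE_TTL = 3600      # one-hour base expiry for home-page cache entries
JITTER = 300         # spread expirations over an extra 0-5 minutes

def ttl_with_jitter(base=BASE_TTL, jitter=JITTER):
    """Return the base TTL plus a random offset, so keys cached at the
    same moment do not all expire at the same moment."""
    return base + random.randint(0, jitter)

# With a client such as redis-py this would be applied as:
#   r.set(key, value, ex=ttl_with_jitter())
```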
So have you used Redis distributed locking? What is it all about?
First you use setnx to contend for the lock, then you use expire to add an expiration time so the lock is not held forever if you forget to release it.
At this point the interviewer will tell you that you answered well, and then ask: what happens if the process crashes unexpectedly, or is restarted for maintenance, after the setnx but before the expire executes?
At this point you need to give surprised feedback: ah, yes, that lock would never be released. Then scratch your head and pretend to think for a moment, as if the answer is something you are working out on the spot, and reply: I remember the set command takes some fairly complex parameters; it should be possible to combine setnx and expire into a single atomic command!
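The command the candidate is half-remembering is `SET key value NX EX seconds`, which sets the value and the expiry atomically. Below is a rough sketch of those semantics, using a plain dict as a stand-in for Redis; with redis-py the real call would be along the lines of `r.set(key, token, nx=True, ex=ttl)`.

```python
import time
import uuid

def acquire_lock(store, key, ttl, now=None):
    """Sketch of the atomic `SET key value NX EX ttl` semantics.
    Returns a lock token on success, or None if the lock is already
    held and has not yet expired."""
    now = time.time() if now is None else now
    entry = store.get(key)
    if entry is not None and entry[1] > now:   # lock held, not expired
        return None
    token = uuid.uuid4().hex                   # unique owner identifier
    store[key] = (token, now + ttl)            # value and expiry in one step
    return token

def release_lock(store, key, token):
    """Release only if we still own the lock (the token must match)."""
    entry = store.get(key)
    if entry is not None and entry[0] == token:
        del store[key]
        return True
    return False
```

The token matters: a holder whose lock already expired must not be able to delete a lock that someone else has since acquired.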
At this point the other party will smile and start to think: well, this kid is not bad, it’s getting interesting. Next question: if there are 100 million keys in Redis, and 100,000 of them start with a fixed, known prefix, how do you find them all?
Use the keys command to scan out the list of keys matching a given pattern.
The interviewer then follows up: if this Redis instance is serving an online business, what problem would the keys command cause?
Here you have to bring up a key characteristic of Redis: Redis is single-threaded. The keys command blocks that thread for a while, and the online service stalls until the command finishes. Instead you can use the scan command, which extracts the keys matching a pattern without blocking, but with some probability of returning duplicates; just de-duplicate on the client side. The overall time spent will be longer than using keys directly.
However, incremental iteration commands are not without drawbacks. For example, SMEMBERS returns all the elements of a set key in one shot, whereas an incremental iteration command such as SCAN offers only limited guarantees about the elements it returns, because keys may be modified while the iteration is in progress.
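The client-side de-duplication mentioned above is just a set. In this sketch, `scan_batches` stands in for the successive batches a real SCAN cursor would return (SCAN may yield the same key more than once); the redis-py call in the comment is the usual way to drive the cursor for real.

```python
def collect_keys(scan_batches, prefix):
    """Accumulate SCAN results into a set so that keys returned more
    than once across iterations are counted only once."""
    seen = set()
    for batch in scan_batches:
        for key in batch:
            if key.startswith(prefix):
                seen.add(key)
    return seen

# With redis-py, matching is done server-side and the loop becomes:
#   for key in r.scan_iter(match="user:*", count=1000):
#       seen.add(key)
```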
Have you ever used Redis as an asynchronous queue? How did you use it?
Generally the list structure is used as a queue: rpush produces messages and lpop consumes them. When lpop returns no message, the consumer sleeps for a while and then retries.
What if the interviewer asks whether you can do without the sleep?
The list also has a command called blpop, which blocks until a message arrives when the list is empty.
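The list-as-queue pattern can be modeled in memory like this. The class below is only a stand-in to show the RPUSH/LPOP semantics; the blocking variant mentioned in the comment assumes a redis-py client.

```python
from collections import deque

class MiniQueue:
    """In-memory stand-in for the RPUSH / LPOP list-as-queue pattern."""
    def __init__(self):
        self.items = deque()

    def rpush(self, msg):
        """Producer appends on the right."""
        self.items.append(msg)

    def lpop(self):
        """Consumer pops from the left; returns None when empty, which is
        where a real consumer would either sleep-and-retry or switch to a
        blocking pop such as `r.blpop(key, timeout=5)` with redis-py."""
        return self.items.popleft() if self.items else None
```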
What if the interviewer then asks whether it is possible to produce once and consume many times?
Using the pub/sub topic subscriber pattern, a 1:N message queue can be implemented.
What if the interviewer keeps pressing: what are the disadvantages of pub/sub?
If a consumer goes offline, the messages produced in the meantime are lost; for that you have to use a dedicated message queue such as RocketMQ.
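Both the 1:N delivery and the message-loss drawback can be seen in a toy model. This is not how Redis is implemented internally, just an illustration of the delivery semantics: publish reaches only subscribers connected at that instant, and nothing is persisted.

```python
class MiniPubSub:
    """Toy model of Redis pub/sub delivery semantics."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, channel, handler):
        self.subs.setdefault(channel, []).append(handler)

    def unsubscribe(self, channel, handler):
        self.subs.get(channel, []).remove(handler)

    def publish(self, channel, msg):
        """Deliver to every currently connected subscriber; like the real
        PUBLISH command, return the number of receivers."""
        handlers = list(self.subs.get(channel, []))
        for h in handlers:
            h(msg)
        return len(handlers)
```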
What if the interviewer pedantically presses on: how does Redis implement a delayed queue?
After this barrage, I’d guess you want to take a bat to the interviewer (who is privately wondering why he keeps asking things he doesn’t know himself), if only you had a baseball bat at hand. But you are restrained. Calm your excitement and answer calmly: use a sortedset, with the timestamp as the score and the message content as the key. Call zadd to produce a message, and the consumer polls with zrangebyscore to fetch the data whose timestamp has passed and process it.
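The sorted-set delayed queue can be sketched as follows. Again the dict is only a stand-in for a real sorted set, and the redis-py calls in the comment are one plausible way to do it for real.

```python
class DelayQueue:
    """Sketch of the sorted-set delayed queue: ZADD with the due
    timestamp as the score, then poll everything whose score has
    passed, the way ZRANGEBYSCORE 0..now would."""
    def __init__(self):
        self.zset = {}                     # member -> score (due time)

    def zadd(self, member, score):
        self.zset[member] = score

    def poll_due(self, now):
        """Return due members ordered by score and drop them, the way a
        real consumer would ZREM each message after processing it."""
        due = sorted((s, m) for m, s in self.zset.items() if s <= now)
        for _, m in due:
            del self.zset[m]
        return [m for _, m in due]

# With redis-py, roughly:
#   r.zadd("delay", {payload: time.time() + 5})
#   due = r.zrangebyscore("delay", 0, time.time())
```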
By this point the interviewer has secretly given you a thumbs up and silently awarded you an A+. What he doesn’t know is that at this very moment your middle finger is up behind your chair.
How does Redis persist data? How do master and slave exchange data?
RDB does full, snapshot-style persistence; AOF does incremental persistence. Because taking an RDB snapshot can take a long time, is not real-time enough, and can lose a lot of data on an outage, AOF is needed alongside it. When a Redis instance restarts, it first rebuilds memory from the RDB file, then replays the recent commands from the AOF to fully restore the pre-restart state.
An easy way to understand it: think of RDB as a full dump of an entire table and AOF as the log of each operation. On restart you first load all the table’s data, but that snapshot may not be up to date, so you then replay the log, and the data is complete. Redis’s actual mechanism: if AOF is enabled and the AOF file exists, the AOF file is loaded in preference; if AOF is disabled or the file does not exist, the RDB file is loaded; if the AOF/RDB file loads successfully, Redis starts up; if the file contains errors, Redis fails to start and prints an error message!
The interviewer follows up: what happens if the machine suddenly loses power?
It depends on how the AOF log’s sync attribute is configured. If performance is not a concern, syncing to disk on every write command loses no data. But syncing on every write is unrealistic under high-performance requirements, so a periodic sync is generally used, for example once per second; in that case at most one second of data is lost.
The interviewer then asks: what is the principle behind RDB?
Give two terms: fork and COW. Fork means Redis performs the RDB operation by creating a child process; COW means copy-on-write. After the child process is created, parent and child share the data pages; the parent continues to serve reads and writes, and the pages dirtied by writes are gradually separated from what the child process sees.
Note: on this question, if you can also state the advantages and disadvantages of AOF and RDB, then as the interviewer I would give you extra credit. The differences between the two are actually large and also touch on Redis cluster data synchronization and so on. Interested readers can leave a comment, and I will write a dedicated article to introduce it.
What are the benefits of pipelining, and why use it?
Multiple IO round trips can be reduced to a single one, provided there is no causal dependency between the commands executed in the pipeline. When stress-testing with redis-benchmark you can see that an important factor affecting Redis’s peak QPS is the number of commands batched per pipeline.
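A back-of-the-envelope model makes the round-trip saving concrete. The per-command cost below is a made-up figure for illustration, and the redis-py snippet in the comment is one common way to batch commands.

```python
def total_latency_ms(n_commands, rtt_ms, per_command_ms=0.01, pipelined=False):
    """Model of why pipelining helps: without it, every command pays a
    full network round trip; with it, all N commands share a single
    round trip. Server-side execution cost is paid either way."""
    round_trips = 1 if pipelined else n_commands
    return round_trips * rtt_ms + n_commands * per_command_ms

# With redis-py, batching looks roughly like:
#   pipe = r.pipeline(transaction=False)
#   for i in range(1000):
#       pipe.set(f"k:{i}", i)
#   pipe.execute()
```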
Do you know anything about Redis’ synchronization mechanism?
Redis supports master-slave synchronization and slave-slave synchronization. On the first synchronization, the master node does a bgsave while recording subsequent modification operations in an in-memory buffer. Once the snapshot completes, the full RDB file is sent to the replica node, which loads the image into memory. After loading completes, it notifies the master to send over the modification operations recorded in the meantime, and the replica replays them to finish synchronizing. Subsequent incremental data can then be synchronized through the AOF log, somewhat like a database binlog.
Have you used Redis clusters? How is high availability ensured, and what is the principle behind clustering?
Redis Sentinel focuses on high availability: when the master goes down, it automatically promotes a slave to master and continues providing service.
Redis Cluster is aimed at scalability: when a single Redis instance runs out of memory, Cluster shards the storage across nodes.
End of interview
You can do it, kid. When can you come to work? How about tomorrow?
You force yourself to stay calm: that urgent? I still need to rent a room; how about next Monday instead.
OK, I thought: this kid is this good, he must have plenty of offers in hand. No, I have to ask HR to give him a raise.
You can’t help but give yourself credit for making it to the end!
(Likes please! Every time you read without liking, are you all trying to freeload off me? You guys are bad, but I like it.)
In a technical interview, whether the question is about Redis or anything else, giving practical examples, or directly describing the problems you hit and the lessons you learned during development, will earn a lot of points with the interviewer. Your answers should also be logically organized: don’t jump around from one point to another, which makes it easy to get yourself confused.
Another thing: when I ask why you use Redis, don’t just blurt out a bare answer to the question. You can answer like this:
Hello, handsome interviewer. First, our project’s DB hit a bottleneck; in scenarios such as flash sales and hot data, the DB basically couldn’t cope, so we needed to add caching middleware. The caching middleware currently on the market are Redis and Memcached; their advantages and disadvantages are ……; weighing these together with our project’s characteristics, here is which one we finally chose at technology-selection time.
If you answer my questions in such an organized, logical way, and also cover so many points beyond the question itself, I will feel you are not just someone who can write code: you are clear-headed and have your own understanding of and thinking about technology selection, middleware, and the project. To put it bluntly, your offer is in play.